The document provides an overview of graphics systems and their components. It discusses four major tasks for rendering geometric objects: modeling, geometric processing, rasterization, and hidden-surface removal. Its major sections cover input devices, hard-copy devices, video display devices, and graphics workstations.
Introduction to computer graphics, part 2 (Ankit Garg)
This document discusses cathode ray tubes (CRTs) and how they work as display devices for computer graphics. It explains that CRTs contain an electron gun that emits a stream of electrons, which is focused into a beam and directed to specific points on the phosphor-coated front of the picture tube. When the electron beam hits a phosphor dot, the dot glows with a brightness proportional to the beam intensity. Color CRTs use three electron guns and a shadow mask to separately excite red, green, and blue phosphor dots, allowing for color displays. The document also covers other properties of CRTs such as resolution, persistence, and aspect ratio.
This document provides information on different types of display devices and monitor technologies. It discusses cathode ray tube (CRT) displays, including their structure, working principle, and technologies such as raster scan and vector scan displays. Liquid crystal displays (LCD) and plasma displays are also mentioned. Key aspects of displays such as pixels, resolution, size, viewing angle, response time, and brightness are defined. CRTs are described as having advantages like high resolution and wide viewing angles, but also disadvantages such as bulky depth and weight.
Video display devices use various technologies to visually present electronic information. Common types include CRT, LCD, LED, and plasma displays. CRTs use an electron gun to excite phosphors on the screen and were widely used in monitors and TVs. They can operate in raster or random scan modes. Color CRTs use shadow mask or beam penetration methods. Flat panel displays like LCDs are thinner than CRTs and use light modulation rather than emission to display images.
This document provides an overview of graphics display systems. It discusses the basic components and operation of cathode ray tube (CRT) displays, including the electron gun, focusing and deflection systems. It describes the refresh process of raster-scan CRTs and how random-scan CRTs work. Color CRT monitors are discussed, specifically the beam penetration and shadow mask methods. Key characteristics like resolution, persistence and aspect ratio are also summarized.
Model 1 multimedia graphics and animation introduction (1) (Rahul Borate)
Fragment of a raster-scan vs. random-scan comparison table:
… | Graphics controller
9. Refreshing of screen is required. | Refreshing of screen is not required.
10. Suitable for TV, monitor. | Suitable for CAD/CAM application, scientific visualization.
Computer graphics involves rendering pictures, charts, and graphs on computers rather than just text. It has many applications, including movies, games, medical imaging, CAD, education, and simulations. Computer graphics uses pixels, the smallest display elements, to represent images on screens. There are two main types: interactive graphics, which allows user input, and passive graphics, which does not. Raster scan displays refresh images by sweeping an electron beam across the screen line by line, while random scan displays trace the image directly, one line segment at a time, only where lines are to be drawn. Algorithms like DDA and Bresenham's are used to efficiently render lines and circles pixel by pixel.
Computer graphics uses computers to draw and display pictures, graphics, and data in pictorial form. It expresses data visually instead of just as text. Computer graphics is used in movies, games, medical imaging, design, education, simulators, art, presentations, image processing, and graphical user interfaces. Pixels are the smallest display elements on a screen, each with an intensity and color value. Interactive graphics allows user input to modify images, while passive graphics does not. Common display devices are CRT monitors, which use electron beams to excite phosphors, and LCD screens, which use liquid crystals to control light transmission at each pixel. Algorithms like DDA and Bresenham's are used to draw lines on these displays.
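The summaries above repeatedly name DDA and Bresenham's line algorithms. As a hedged sketch (not taken from the source slides), the integer-only Bresenham variant can be written as:

```python
# Minimal sketch of Bresenham's line algorithm (integer arithmetic only).
# Steep and right-to-left lines are handled by symmetry; the returned
# list of (x, y) pairs stands in for writes to a frame buffer.

def bresenham_line(x0, y0, x1, y1):
    pixels = []
    steep = abs(y1 - y0) > abs(x1 - x0)
    if steep:                       # step along y instead of x
        x0, y0, x1, y1 = y0, x0, y1, x1
    if x0 > x1:                     # always draw left to right
        x0, x1 = x1, x0
        y0, y1 = y1, y0
    dx, dy = x1 - x0, abs(y1 - y0)
    err = dx // 2                   # decision accumulator
    ystep = 1 if y0 < y1 else -1
    y = y0
    for x in range(x0, x1 + 1):
        pixels.append((y, x) if steep else (x, y))
        err -= dy
        if err < 0:                 # error crossed the midpoint: step y
            y += ystep
            err += dx
    return pixels

print(bresenham_line(0, 0, 4, 2))   # [(0, 0), (1, 0), (2, 1), (3, 1), (4, 2)]
```

Because every step uses only integer addition and subtraction, this is why the summaries single out Bresenham's algorithm as the efficient choice for raster hardware.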
The document provides information on different types of display devices used in computer graphics, including CRT, color CRT monitors, direct view storage tubes, and flat panel displays. It describes the key components and working of CRTs, including the electron gun, phosphor coating, control grid, deflection plates, and techniques for color CRT monitors. Raster scan and random scan are introduced as techniques for producing images on CRT screens. Details are provided on components like shadow mask and refresh buffer used in raster scan systems.
The document discusses various display devices including CRT, flat panel displays, and their components and technologies. CRTs use an electron gun and phosphor-coated screen to create images and come in random scan and raster scan varieties. Components include the electron gun, control electrodes, focusing system, and deflection yoke. Flat panel displays are thinner than CRTs and include LCD and plasma displays.
This document summarizes computer graphics and display devices. It discusses that computer graphics involves displaying and manipulating images and data using a computer. A typical graphics system includes a host computer, display devices like monitors, and input devices like keyboards and mice. Common applications of computer graphics include GUIs, charts, CAD/CAM, maps, multimedia, and more. Display technologies discussed include CRT monitors, LCD panels, and other devices. Key aspects of CRT monitors like refresh rate, resolution, and bandwidth are also summarized.
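The mention of refresh rate, resolution, and bandwidth invites a quick back-of-envelope calculation; the 1024x768, 24-bit, 60 Hz figures below are illustrative assumptions, not values from the source:

```python
# Back-of-envelope relation between resolution, color depth, refresh
# rate, frame-buffer size, and video bandwidth. All figures here are
# assumed for illustration (1024x768, 24-bit color, 60 Hz refresh).

width, height = 1024, 768        # resolution in pixels
bits_per_pixel = 24              # true-color frame buffer
refresh_hz = 60                  # full-screen redraws per second

framebuffer_bytes = width * height * bits_per_pixel // 8
pixels_per_second = width * height * refresh_hz   # lower bound; retrace time ignored

print(framebuffer_bytes)         # 2359296 bytes (about 2.25 MiB)
print(pixels_per_second)         # 47185920 pixels per second
```

The second figure is only a lower bound on the pixel clock, since real video timings also spend time in horizontal and vertical retrace.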
This document provides an overview of graphics systems, including video display devices, input devices, and raster-scan systems. It describes cathode ray tube monitors as the primary output device and discusses raster-scan and random-scan display principles. Color CRT monitors use color phosphors together with the shadow-mask or beam-penetration method to produce color. Flat panel displays like plasma panels and LCDs are also covered. Common input devices include mice, keyboards, tablets, and touchscreens. Raster-scan systems use a frame buffer in video memory that is read out by a video controller to refresh the image on the monitor.
CG03 Random Raster Scan displays and Color CRTs.ppsx (jyoti_lakhani)
The document discusses different types of graphics displays. It describes raster-scan displays, which use an electron beam that sweeps across the screen from top to bottom to display an image. Picture definition is stored in a frame buffer. It also describes random-scan displays, which direct the electron beam only where lines need to be drawn. Color CRT monitors use phosphors and a shadow mask to display color. Flat panel displays like plasma panels, thin-film electroluminescent displays, and liquid crystal displays provide thinner alternatives to CRTs.
In a raster scan display, the screen is divided into a grid of pixels that is scanned line by line from top to bottom. Each pixel is set on or off according to values stored in a frame buffer. The electron beam sweeps each line from left to right, then returns to the left side to begin the next line, a movement called horizontal retrace. After completing the frame, the beam returns to the top-left corner for the next frame during vertical retrace. Interlacing refreshes alternate scan lines on successive passes to reduce flicker.
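The scan order described above can be sketched in a few lines. The row-major frame-buffer addressing and the even-lines-then-odd-lines field order are common conventions assumed here for illustration:

```python
# Sketch of how a raster display walks a frame buffer: scan lines are
# stored row-major (top to bottom, each left to right, matching the
# beam's sweep), and interlacing refreshes even and odd lines on
# alternate fields. The tiny 4x3 "screen" is an illustrative size.

WIDTH, HEIGHT = 4, 3
framebuffer = [0] * (WIDTH * HEIGHT)   # one intensity value per pixel

def offset(x, y):
    # Row-major addressing: pixel (x, y) lives at y * WIDTH + x.
    return y * WIDTH + x

def interlaced_line_order(height):
    # Even field first, then odd field, as in interlaced refresh.
    return list(range(0, height, 2)) + list(range(1, height, 2))

framebuffer[offset(2, 1)] = 255        # "turn on" the pixel at (2, 1)
print(interlaced_line_order(HEIGHT))   # [0, 2, 1]
```

Halving the number of lines drawn per pass lets each field complete in half a frame time, which is how interlacing raises the effective refresh rate seen by the eye.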
The document summarizes raster scan and random scan displays. Raster scan displays use an electron beam that sweeps across the screen from top to bottom to generate pixels based on values stored in a refresh buffer. Random scan displays directly draw images using an electron beam without a fixed pattern, storing only line drawing instructions. The key differences are that raster scan is used for realistic images due to storing intensity values while random scan has higher resolution but is limited to line drawings. Both use a cathode ray tube containing an electron gun, deflection coils, and phosphor screen.
The document describes a proposed student information system that would allow institutions to more easily manage student data. It would include functions for recording, searching, modifying, and deleting student records. The system would use a prototyping model since requirements are not yet fully defined. It then provides details on the hardware, software, and functional requirements including use of a SQL database, Windows OS, and securing student data.
This document describes an undergraduate paper on developing a Student Information System (SIS) using Android. The paper outlines the requirements, system design, data flow, security considerations, and implementation details of the SIS. Key elements include four user roles (admin, teacher, student, parent), functionality for student data management, course management, and payments. The system design uses a browser/server model with a MySQL database. Security is ensured through user authentication and authorization.
This document describes an undergraduate paper about developing a student information system (SIS) using Android. The paper outlines the requirements, system design, data flow, security considerations, and implementation details of the SIS. Key aspects of the SIS include functions for administration, teachers, students, and parents with different access levels. The system will manage student data, courses, assignments, announcements and conduct online questionnaires. The design uses a browser/server model with a MySQL database. Future enhancements could include additional modules and using multiple programming languages.
This document discusses the W3C's vision for the Web of Things (WoT) and their efforts to standardize it. The WoT aims to make IoT development easier by treating "things" as resources that can be discovered and controlled via web APIs and scripts. The W3C is developing standards for thing descriptions, scripting APIs, and security to allow interoperability across platforms and reduce data silos. Their goal is for the WoT to fuel an open market of IoT applications and services in the same way the web has for software.
Halftoning is the process of converting a greyscale image to a binary image made up of black and white dots. In newspapers, halftoning simulates greyscale using patterns of black dots of varying sizes on a white background. Traditionally, halftoning was done photographically by projecting an image through a halftone screen with an etched grid onto film. Different screen frequencies control dot size. Digital halftoning techniques include patterning, which replaces each pixel with a pattern from a binary font, and dithering, which thresholds the image against a dither matrix to determine black and white pixels.
This document discusses illumination models and shading techniques used in 3D rendering. It describes common illumination models including ambient illumination, diffuse reflection, and specular reflection. It also covers different polygon rendering methods like flat shading, Gouraud shading, and Phong shading. Examples are provided to illustrate the different illumination models and how they are used in rendering 3D objects and surfaces under various lighting conditions.
The document discusses illumination models and surface rendering methods in computer graphics. It provides information on several key topics:
1. Illumination models (also called lighting models or shading models) are used to calculate the color and intensity of illuminated surfaces. Common illumination models include ambient light, diffuse reflection, and specular reflection (Phong model).
2. Surface rendering methods determine the pixel colors for all positions in a 3D scene. Polygon rendering methods approximate object surfaces with polygons and calculate color/intensity at polygon vertices (Gouraud) or interior points (Phong).
3. Additional concepts covered include light sources, reflection, transparency, shadows, color, and intensity attenuation with distance from light sources.
Sachpazis_Consolidation Settlement Calculation Program-The Python Code and th...Dr.Costas Sachpazis
Consolidation Settlement Calculation Program-The Python Code
By Professor Dr. Costas Sachpazis, Civil Engineer & Geologist
This program calculates the consolidation settlement for a foundation based on soil layer properties and foundation data. It allows users to input multiple soil layers and foundation characteristics to determine the total settlement.
We have designed & manufacture the Lubi Valves LBF series type of Butterfly Valves for General Utility Water applications as well as for HVAC applications.
7. Input Devices
• Trackball
– A ball device that can be rotated with the
fingers or palm of hand
• Spaceball
– Six degrees of freedom
– Does not move, detects strain placed on the ball
by trying to move it.
8. Input Devices
• Joystick
– A small, vertical lever mounted on a base
– Movable joystick measures motion
– Stationary (isometric) joystick measures strain.
• Data glove
– Used to grasp a virtual object
– Measures hand and finger position
– 2D or 3D
– Can also be used as an input device to detect surfaces
9. Input Devices
• Digitizers
– Used for drawing, painting, or selecting positions
– Graphics tablet used to input 2D coordinates by activating a hand
cursor or stylus at given positions on a flat surface
– Used to trace contours, select precise coordinate positions
• Hand held cursor
• Stylus
– Electromagnetic
• Grid of wires
• Electromagnetic pulses induce an electrical signal in the stylus or
cursor
– Acoustic
• Sound waves to detect stylus position by microphones
• Can be 3D
10. Input Devices
• Image scanners
– Used to store images on a computer
– Hand held
– Flatbed
– Drum.
11. Input Devices
• Light pens
– Pen-shaped device to select screen positions by
detecting light coming from points on the CRT
screen
– Used to capture position of an object or select
menu options.
12. Input Devices
• Voice systems
– Speech recognition systems to recognize voice
commands
– Used to activate menu options or to enter data
– Uses a dictionary from a particular user
(learning system).
13. Hard-copy Devices
• Hard-copy devices
– Plotters
• 2D moving pen with stationary paper
• 1D pen and 1D moving paper
– Printers
• Impact devices
– Inked ribbon
• Non impact devices
– Laser, ink-jet, xerographic, electrostatic, electrothermal.
14. Video Display Devices
• Cathode-ray tubes
– Raster-scan displays
– Random-scan displays
– Color CRT displays
– Direct View Storage Tubes
• Flat-panel displays
• Three-dimensional viewing devices
• Stereoscopic and virtual-reality systems
16. Cathode-Ray Tubes
(from Donald Hearn and Pauline Baker)
• Classical output device is a monitor.
• Cathode-Ray Tube (CRT)
– Invented by Karl Ferdinand Braun (1897)
19. Cathode-Ray Tubes
1. Working of CRT
– Beam of electrons directed from the cathode (-) to the
phosphor-coated (fluorescent) screen (anode (+))
– Directed by magnetic focusing and deflection coils
(anodes) in a vacuum-filled tube
– Phosphor emits a photon of light when hit by an
electron, with varied persistence (long, 15-20 ms, for text /
short, < 1 ms, for animation)
– Refresh rate (50-60 Hz / 72-76 Hz) to avoid flicker /
trails
– Phosphors are inorganic compounds characterized by
their persistence and their color (blue, red, green).
20. Cathode-Ray Tubes
– Horizontal deflection and vertical deflection direct the
electron beam to any point on the screen
– Intensity knob: regulates the flow of electrons by
controlling the voltage at the control grid (a higher
negative voltage reduces the electron density and thus the brightness)
– Accelerating voltage from positive coating inside
screen (anode screen) or an accelerating anode
2. Image maintenance
– Charge distribution to store picture information OR
– Refresh CRT: refreshes the display constantly to
maintain phosphor glow.
21. Cathode-Ray Tubes
3. Focusing
– Focusing forces the electron beam to converge to a
point on the monitor screen
– Can be electrostatic (lens) or magnetic (field)
4. Deflection
– Deflection directs the electron beam horizontally and/or
vertically to any point on the screen
– Can be controlled by electric (deflection plates, slide 9)
or magnetic fields (deflection coils, slide 5)
– Magnetic coils: two pairs (top/bottom, left/right) of
tube neck
– Electric plates: two pairs (horizontal, vertical)
22. Cathode-Ray Tubes
Characteristics of Cathode-Ray Tube (CRT)
1. Intensity is proportional to the number of electrons
in the beam per second (brightness)
2. Resolution is the maximum number of points that can
be displayed without overlap; is expressed as number
of horizontal points by number of vertical points;
points are called pixels (picture elements); example:
resolution 1024 x 768 pixels. Typical resolution is
1280 x 1024 pixels.
• High-definition systems: high resolution systems.
23. Cathode-Ray Tubes
3. Persistence is defined as the time taken by the emitted
light to decay to one-tenth of its original intensity.
• Maximum persistence ~1 s; minimum persistence 10-60 μs
• Higher persistence → lower refresh rate → suited to complex, static images
• Lower persistence → higher refresh rate → suited to animation
4. Refresh rate (Hz): the number of times the screen is drawn or
refreshed per second.
• Usually 60 Hz (why?)
• Depends upon persistence
5. Pixel: Picture Element
• Mapping of a phosphor element to a pixel
• A bit per pixel for monochrome
• A byte per pixel for 256 intensity levels
• 3 bytes per pixel to produce more than 16.7 million colors
24. Cathode-Ray Tubes
Aspect ratio
– Aspect ratio is the ratio of vertical points to horizontal
points needed to produce equal-length lines in both directions.
– A square plotted with the same number of pixels under
different aspect ratios will look as:
[Figure: squares drawn with Ar = 1, Ar > 1, and Ar < 1]
25. Cathode-Ray Tubes
– It is also defined as the ratio of the vertical dimension over
the horizontal dimension. For an 8 in × 6 in screen with a resolution
of 640 x 480 pixels:
Horizontal: 640 / 8 = 80 pixels / inch
Vertical: 480 / 6 = 80 pixels / inch
Square pixels (no distortion).
27. Raster-scan Displays
1. Introduction
– Raster-scan display is the most common type of
monitor using a CRT.
– A raster is a matrix of pixels covering the screen area
and is composed of raster lines.
– The electron beam scans the screen from top to
bottom one row at a time. Each row is called a scan
line.
– The electron beam is turned on and off to produce a
collection of dots painted one row at a time. These
will form the image.
29. Raster-scan Displays
2. Refresh Procedure
– Retracing
• Horizontal retrace – beam returns to the left of the screen
• Vertical retrace – beam returns to the top left corner of the screen
– Blanking
• Horizontal retrace blanking
• Vertical retrace blanking
– Interlacing
• Display first the even-numbered lines, then the odd-numbered lines
• Lets the viewer see a full image in half the time
• Useful for slow refresh rates (30 Hz shows as 60 Hz).
30. Raster-scan Displays
– Overscanning
• Scan lines are extended beyond the visible edge because there is a
limit on the speed of the sweep generator
• Avoids clipping at the borders and distortion
• Top and bottom: vertical overscanning
• Left and right: horizontal overscanning
– Refresh rate
• 24 frames per second is the minimum to avoid flicker, corresponding
to 24 Hz (1 Hz = 1 refresh per second)
• Current raster-scan displays have a refresh rate of at least 60
frames per second (60 Hz), up to 120 (120 Hz).
34. Raster-scan Displays
3.1 Frame Buffer
– Also called the Refresh Buffer; contains the picture definition
– The image is stored in a frame buffer covering the total
screen area, where each memory location corresponds
to a pixel.
– Consider it as a 2-D memory array
– E.g. a frame buffer of size 8 x 8 with a color depth of
8 levels (values 0-7)
– Uses large memory:
640 x 480 → 307,200 pixels → 307,200 bits ≈ 38 kB at 1 bit per pixel
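The 2-D array view above can be sketched in a few lines of Python (an illustrative sketch, not from the slides; `set_pixel` and the buffer layout are assumptions):

```python
# A minimal frame-buffer sketch: an 8 x 8 raster with a colour
# depth of 8 levels (values 0-7), matching the example above.

WIDTH, HEIGHT, LEVELS = 8, 8, 8

# One memory location per pixel, initialised to 0 (off / black).
frame_buffer = [[0 for _ in range(WIDTH)] for _ in range(HEIGHT)]

def set_pixel(x, y, intensity):
    """Store an intensity value for pixel (x, y), row 0 at the top."""
    if not (0 <= x < WIDTH and 0 <= y < HEIGHT):
        raise ValueError("pixel outside the raster")
    if not (0 <= intensity < LEVELS):
        raise ValueError("intensity outside the colour depth")
    frame_buffer[y][x] = intensity

set_pixel(3, 2, 7)          # brightest level
print(frame_buffer[2][3])   # -> 7

# Memory estimate for a 640 x 480 monochrome raster (1 bit per pixel):
bits = 640 * 480
print(bits, "bits =", bits // 8, "bytes")  # 307200 bits = 38400 bytes, ~38 kB
```

A real controller would store the buffer as packed bits or bytes in dedicated video memory; the nested list is only for illustration.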
35. Raster-scan Displays
– Bitmap: in a monochrome system, each bit is 1 or 0 for
the corresponding pixel to be on or off, making the frame
buffer a bitmap.
– The display processor scans the frame buffer to turn the
electron beam on/off depending on whether the bit is 1 or 0.
– [Example bitmap]
36. Raster-scan Displays
– Pixmap: for color monitors, the frame buffer also contains
the color of each pixel (color buffer) as well as other
characteristics of the image (gray scale, …).
– The depth of the buffer area is the number of bits per pixel
(bit planes), up to 24; e.g. 8 bits/pixel gives values 0..255
– Examples: television panels, printers, PC monitors
– [Example: 8-level pixmap]
37. Raster-scan Displays
3.2 Video Controller
– Also called the scan controller
– Displays the image on the screen: it reads the frame
buffer and drives the display
[Figure: Video Controller]
38. Raster-scan Displays
– How each pixel value in the frame buffer is sent to the right
place on the display surface
39. Raster-scan Displays
– Basic Operation of Video Controller/Scan Controller
(from Donald Hearn and Pauline Baker)
40. Raster-scan Displays
3.3 Display Processor
– Relieves the CPU from graphics chores
– It digitizes the picture definition in the application
program
– It performs SCAN CONVERSION
– Defines graphic objects and characters to be displayed
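Scan conversion turns a geometric description, such as a line segment, into frame-buffer pixel positions. A minimal DDA-style sketch in Python (illustrative only; a real display processor implements this in hardware):

```python
def scan_convert_line(x0, y0, x1, y1):
    """Digital Differential Analyzer: return the pixels approximating
    the line segment from (x0, y0) to (x1, y1)."""
    steps = max(abs(x1 - x0), abs(y1 - y0))
    if steps == 0:
        return [(x0, y0)]
    dx = (x1 - x0) / steps      # increment per step along x
    dy = (y1 - y0) / steps      # increment per step along y
    pixels = []
    x, y = float(x0), float(y0)
    for _ in range(steps + 1):
        pixels.append((round(x), round(y)))  # snap to the pixel grid
        x += dx
        y += dy
    return pixels

# A diagonal line becomes a run of discrete pixels:
print(scan_convert_line(0, 0, 5, 2))
```

Snapping to the grid is what produces the jagged diagonals noted later in the raster-vs-random comparison.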
42. Random-scan Displays
1. Introduction
– Random scan systems are also called
• Vector Displays
• stroke-writing, or
• calligraphic displays.
– The electron beam directly draws the picture in
any specified order.
– A pen plotter is an example of such a system.
43. Random-scan Displays
– Picture is stored in a display list, refresh display
file, vector file, or display program as a set of line
drawing commands.
– Refresh rate depends upon the size of the file.
– Refreshes by scanning the list 30 to 60 times per
second.
– More suited for line-drawing applications such as
architecture and manufacturing
45. Random-scan Displays
3. Advantages:
– Good quality lines
– No need of scan conversion
– Easy animation and requires little memory
4. Disadvantages:
– Requires intelligent electron beam (processor controlled)
– Limited screen density, limited to simple, line-based images
– Limited color capability.
• Improved in the 1960’s by the Direct View Storage Tube
(DVST) from Tektronix.
46. Raster vs. Random-scan Displays
ASPECT              RASTER                                   RANDOM
Display mechanism   E-beam traces the entire screen from     E-beam can highlight random
                    the upper left corner to bottom right    positions on the screen
Drawing unit        Pixel                                    Line
Image storage       Frame buffer                             Display file
Image types         Can display very complex images          Wire-frame modeling
                    with greater accuracy
Image quality       May be jagged due to digitization;       Smooth lines, as the e-beam
                    diagonal lines are produced with         directly follows the line path;
                    lower intensity                          diagonal lines are produced
                                                             with equal intensity
Refreshing          The entire screen has to be refreshed    Only selected portions are redrawn
Refresh rate        Maximum ~80 Hz                           Higher refresh rates
Animations          Supported                                Not supported
Colors              Higher color depth                       Fewer colors and shades
Color technique     Shadow masking                           Beam penetration
48. Color CRT Monitor
1. Introduction
– Uses different phosphors, a combination of
Red, Green, and Blue, to produce any color.
– Two methods:
• Beam penetration
• Shadow Masking
49. Color CRT Monitor
2. Beam Penetration Method
– Random-scan systems use beam penetration.
– Two layers of phosphor (red, green); slow electrons
excite red, fast electrons excite green.
– Intermediate speeds excite both, giving yellow and
orange.
– Color is controlled by the electron-beam voltage.
– It is inexpensive
– Only produces a restricted set of colors
– Picture quality is low
50. Color CRT Monitor
3. Shadow Masking Method
– Raster-scan systems use a shadow mask with three
electron guns: red, green, and blue (RGB color
model).
– Color is produced by adjusting the intensity level of
each electron beam.
– Produces a wide range of colors, from 8 to several
million.
– The arrangement of color components can be
• Delta-Delta arrangement
• In line arrangement
52. Color CRT Monitor
R G B color
0 0 0 black
0 0 1 blue
0 1 0 green
0 1 1 cyan
1 0 0 red
1 0 1 magenta
1 1 0 yellow
1 1 1 white
53. Color CRT Monitor
– Color CRT’s are designed as RGB monitors also called
full-color system or true-color system.
– Use shadow-mask methods with intensity from each
electron gun (red, green, blue) to produce any color
directly on the screen without preprocessing.
– The frame buffer contains 24 bits per pixel, giving 256 voltage
settings to adjust the intensity of each electron beam,
thus producing a choice of up to 256³ ≈ 16.7 million colors for
each pixel.
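The 24-bit scheme can be made concrete in Python (the packing helpers below are hypothetical, shown only to illustrate the bit layout of 8 bits per gun):

```python
# One pixel's colour packed into 24 bits: 8 bits per electron gun.

def pack_rgb(r, g, b):
    """Pack three 0-255 intensities into a single 24-bit value."""
    assert all(0 <= c <= 255 for c in (r, g, b))
    return (r << 16) | (g << 8) | b

def unpack_rgb(value):
    """Recover the three gun intensities from a 24-bit value."""
    return (value >> 16) & 0xFF, (value >> 8) & 0xFF, value & 0xFF

print(256 ** 3)                    # 16777216 colours, about 16.7 million
print(hex(pack_rgb(255, 0, 0)))    # pure red -> 0xff0000
print(unpack_rgb(0x00FF00))        # -> (0, 255, 0), pure green
```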
56. DVST Displays
2. Components
– Flooding gun: floods the entire screen and charges the
collector plate
– Writing gun: same as the e-gun in a CRT, having a heating
filament, cathode, focusing anode, and deflection yokes
– Collector plate: partly energized by the flooding gun; holds
a background charge to keep fired phosphor
illuminated
– Phosphor screen: a higher-persistence CRT screen
– Ground: discharges the collector to erase the screen
57. DVST Displays
3. Advantages/Disadvantages
– No Refreshing required
– It can draw complex images with higher resolution
– Does not display colors
– Selected part of the picture cannot be erased
– Animation not supported
59. Flat Panel Displays
1. Introduction
– Flat panel displays are video devices that are thinner,
lighter, and require less power than CRT’s.
– Examples: wall frames, pocket notepads, laptop
computer screens, …
2. Types of Flat Panel Displays
– Emissive panels convert electrical energy into light:
plasma panels, thin-film electroluminescent display
device, light-emitting diodes.
– Non-emissive convert light into graphics using optical
effects:
liquid-crystal device (LCD).
60. Flat Panel Displays
(from Donald Hearn and Pauline Baker)
[Figure: plasma panel with vertical cathode wires (-ve) and horizontal anode wires (+ve)]
2.1 Plasma-panel display:
61. Flat Panel Displays
Components of Plasma-panel displays
– Cathode: fine wires attached to the glass plates deliver -ve
voltage to the gas cells along the vertical axis
– Fluorescent cells: small pockets of gas (or liquids or solids)
that emit light in the excited state
– Anode: fine wires attached to the glass plates deliver +ve
voltage to the gas cells along the horizontal axis
– Glass plates: act as capacitors to maintain the sustaining
voltage
62. Flat Panel Displays
Working of Plasma-panel displays
– An array of small fluorescent gas lights
– Constructed by filling a mixture of gases (usually neon)
between two glass plates
– vertical conducting ribbons are placed in one plate, and
horizontal conducting ribbons are placed in the other
plate
– voltage is applied to the two ribbons to transform gas
into glowing plasma of electrons and ions.
63. Flat Panel Displays
– Two voltage levels
• Firing voltage: 60
• Sustaining voltage: 40
Advantages/Disadvantages:
– No need for refreshing
– Provides fairly high resolution
– However, MONOCHROME with few grey levels
65. Flat Panel Displays
Thin-film electroluminescent displays are
– similar devices, except that the region between the
plates is filled with phosphor instead of gas.
Example: zinc sulfide with manganese
– A voltage applied between the plates moves electrons to
the manganese atoms, which release photons of light.
66. Flat Panel Displays
2.3 Light-emitting diode:
– a matrix of diodes, one per pixel
– apply voltage stored in the refresh buffer
– convert voltage to produce light in the display.
67. Flat Panel Displays
2.4 Liquid-crystal displays (LCD):
– LCD screens are often used in small devices such as
calculators and laptop monitors.
– Non-emissive type of display
– The picture is produced by passing light from a light
source through liquid-crystal material
– Liquid-crystal material contains crystals within a liquid;
nematic (thread-like) liquid crystals have a rod shape that
can either align with the light direction or not,
when voltage is applied to the conductors.
– Liquid-crystal material can thus be programmed to either let
the light through or not
69. Flat Panel Displays
Components of Liquid Crystal Displays
– Glass plates: contain the liquid crystal and serve as a
bonding surface for the conductive coating
– Transparent conductors: apply voltage across the
liquid crystals
– Liquid crystals: substance that polarizes light when
voltage is applied
– Two polarized films: transparent sheets that polarize
light
– Reflectors
– Reflectors
70. Flat Panel Displays
– ON state: when the light is twisted through
– OFF state: when it blocks the light
– Passive-matrix LCDs have a refresh buffer; the
screen is refreshed at 60 frames per second
– Active-matrix LCDs store a transistor at each
pixel, preventing charge from leaking out of the
liquid crystals
– Temperature dependent, sluggish
– Require little power to operate
71. 3-D Viewing Devices
• For the display of 3D scenes.
• Often using a vibrating, flexible mirror.
• Scan alternate images in alternate frames.
• Multiple stereo images (time multiplexing)
72. Stereoscopic and
Virtual-Reality Systems
• Another technique for the display of 3D scenes.
• Not true 3D images, but provides a 3D effect.
• Uses two views of a scene, generated along the lines
of sight of the right and left eyes. Depth perception
arises when the right view is seen by the right eye and
the left view by the left eye (stereoscopic effect).
Each view is displayed on alternate refresh cycles.
73. Stereoscopic and
Virtual-Reality Systems
• Stereoscopic systems are used in virtual reality
systems:
– Augmented reality
– Immersive reality
• Headset generates stereoscopic views
• Input devices (gloves, helmet, …) capture motion
• Sensing system in headset tracks user’s position
• Scene projected on an arrangement of walls
74. Graphics Workstations
• Graphics monitors use raster-scan displays (CRT
or flat-panel monitors).
• Graphics workstations provide more powerful
graphics capability:
– Screen resolution 1280 x 1024 to 1600 x 1200.
– Screen diagonal > 18 inches.
• Specialized workstations (medical imaging,
CAM):
– Up to 2560 x 2048.
– Full-color.
• 360 degrees panel viewing systems.
77. Homogeneous Coordinates
• There are three types of co-ordinate systems
1. Cartesian Co-ordinate System
– Left-handed Cartesian co-ordinate system (clockwise)
– Right-handed Cartesian co-ordinate system (anti-clockwise)
2. Polar Co-ordinate System
3. Homogeneous Co-ordinate System
We can always change from one co-ordinate system to
another.
78. Homogeneous Coordinates
– A point (x, y) can be re-written in homogeneous
coordinates as (xh, yh, h)
– The homogeneous parameter h is a non-zero value such
that:
x = xh / h,  y = yh / h
– We can then write any point (x, y) as (h·x, h·y, h)
– We can conveniently choose h = 1 so that
(x, y) becomes (x, y, 1)
79. Homogeneous Coordinates
Advantages:
1. Mathematicians use homogeneous coordinates as they allow
scaling factors to be removed from equations.
2. All transformations can be represented as 3*3 matrices,
giving homogeneity of representation.
3. Homogeneous representation allows us to use matrix
multiplication to calculate transformations extremely
efficiently.
4. An entire object transformation reduces to a single matrix
multiplication operation.
5. Combined transformations are easier to build and understand.
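The conversion rules above can be checked with a small Python sketch (the function names are illustrative, not from the slides):

```python
# Converting between Cartesian and homogeneous coordinates,
# following x = xh / h and y = yh / h.

def to_homogeneous(x, y, h=1.0):
    """(x, y) -> (xh, yh, h) with xh = h*x and yh = h*y."""
    return (h * x, h * y, h)

def to_cartesian(xh, yh, h):
    """(xh, yh, h) -> (x, y); h must be non-zero."""
    if h == 0:
        raise ValueError("h must be non-zero")
    return (xh / h, yh / h)

p = to_homogeneous(3.0, 4.0)        # with h = 1: (3.0, 4.0, 1.0)
print(p)
print(to_cartesian(6.0, 8.0, 2.0))  # the same point with h = 2 -> (3.0, 4.0)
```

Note that (3, 4, 1) and (6, 8, 2) name the same Cartesian point, which is exactly why h = 1 is the convenient choice.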
81. Matrices
• Definition: A matrix is an n X m array of scalars, arranged
conceptually in n rows and m columns, where n and m are
positive integers. We use A, B, and C to denote matrices.
• If n = m, we say the matrix is a square matrix.
• We often refer to a matrix with the notation
A = [a(i,j)], where a(i,j) denotes the scalar in the ith row and
the jth column
• Note that the text uses the typical mathematical notation where
the i and j are subscripts. We'll use this alternative form as it is
easier to type and it is more familiar to computer scientists.
82. Matrices
• Scalar-matrix multiplication:
αA = [α·a(i,j)]
• Matrix-matrix addition: A and B are both n X m
C = A + B = [a(i,j) + b(i,j)]
• Matrix-matrix multiplication: A is n X r and B is r X m
C = AB = [c(i,j)] where c(i,j) = Σ (k = 1..r) a(i,k)·b(k,j)
83. Matrices
• Transpose: A is n X m. Its transpose, A^T, is the m X n matrix
with the rows and columns reversed.
• Inverse: Assume A is a square matrix, i.e. n X n. The
identity matrix, I_n, has 1s down the diagonal and 0s elsewhere.
The inverse A^(-1) does not always exist. If it does, then
A^(-1) A = A A^(-1) = I
Given a matrix A and another matrix B, we can check whether
or not B is the inverse of A by computing AB and BA and
seeing that AB = BA = I
84. Matrices
– Each point P(x, y) in homogeneous matrix form is
represented as the 3 x 1 column vector [x; y; 1]
– Recall how matrix multiplication takes place (a 3 x 3 matrix
times a 3 x 1 vector gives a 3 x 1 vector):
[a b c; d e f; g h i] · [x; y; z] =
[a·x + b·y + c·z; d·x + e·y + f·z; g·x + h·y + i·z]
85. Matrices
• Matrix multiplication does NOT commute: M·N ≠ N·M
– (unless one or the other is a uniform scale)
• Matrix composition works right-to-left.
– Compose: M = A·B·C
– Then apply it to a column matrix v: v' = M·v = A·B·C·v = A·(B·(C·v))
– It first applies C to v, then applies B to the result, then applies A to the result of
that.
87. Transformations
– A transformation is a function that maps a point (or vector)
into another point (or vector).
– An affine transformation is a transformation that maps lines
to lines.
– Why are affine transformations "nice"?
• We can define a polygon using only points and the line segments
joining the points.
• To move the polygon, if we use affine transformations, we only must
map the points defining the polygon as the edges will be mapped to
edges!
– We can model many objects with polygons---and should---
for the above reason in many cases.
88. Transformations
– Any affine transformation can be obtained by applying, in
sequence, transformations of the form
• Translate
• Scale
• Rotate
• Reflection
– So, to move an object all we have to do is determine the
sequence of transformations we want using the 4 types of
affine transformations above.
89. Transformations
– Geometric Transformations: In Geometric transformation
an object itself is moved relative to a stationary coordinate
system or background. The mathematical statement of this
view point is described by geometric transformation applied
to each point of the object.
– Coordinate Transformation: The object is held stationary
while coordinate system is moved relative to the object. These
can easily be described in terms of the opposite operation
performed by Geometric transformation.
90. Transformations
– What does the transformation do?
– What matrix can be used to transform the original
points to the new points?
– Recall--- moving an object is the same as changing a
frame so we know we need a 3 X 3 matrix
– It is important to remember the form of these
matrices!!!
92. Geometric Transformations
– In Geometric transformation an object itself is moved relative
to a stationary coordinate system or background. The
mathematical statement of this view point is described by
geometric transformation applied to each point of the object.
Various Geometric Transformations are:
• Translation
• Scaling
• Rotation
• Reflection
• Shearing
94. Geometric Translation
• Is defined as the displacement of any object by a given
distance and direction from its original position. In simple
words it moves an object from one position to another.
x’ = x + tx y’ = y + ty
Note: House shifts position relative to origin
[Figure: house translated by the vector V = tx·I + ty·J]
96. Geometric Translation
– To make operations easier, 2-D points are written as
homogenous coordinate column vectors
– The translation of a point P(x,y) by (tx, ty) can be written
in matrix form as:
P' = T_v · P, where v = tx·I + ty·J and
T_v = [1 0 tx; 0 1 ty; 0 0 1],  P = [x; y; 1],  P' = [x'; y'; 1]
97. Geometric Translation
– Representing the point as a homogeneous column vector
we perform the calculation as:
[x'; y'; 1] = [1 0 tx; 0 1 ty; 0 0 1] · [x; y; 1]
            = [1·x + 0·y + tx·1; 0·x + 1·y + ty·1; 0·x + 0·y + 1·1]
            = [x + tx; y + ty; 1]
On comparing: x' = x + tx and y' = y + ty
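The translation derivation can be reproduced with a small pure-Python sketch (the `mat_vec` and `translation` helpers are illustrative, not from the slides):

```python
# Translating a point with a 3 x 3 homogeneous matrix.

def mat_vec(m, v):
    """Multiply a 3 x 3 matrix by a 3 x 1 column vector."""
    return [sum(m[i][k] * v[k] for k in range(3)) for i in range(3)]

def translation(tx, ty):
    """Homogeneous translation matrix T_v."""
    return [[1, 0, tx],
            [0, 1, ty],
            [0, 0, 1]]

p = [2, 3, 1]                          # P(2, 3) in homogeneous form
print(mat_vec(translation(4, -1), p))  # -> [6, 2, 1], i.e. P'(6, 2)
```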
99. Geometric Scaling
• Scaling is the process of expanding or compressing the
dimensions of an object determined by the scaling factor.
• Scalar multiplies all coordinates
x’ = Sx × x y’ = Sy × y
• WATCH OUT:
– Objects grow and move!
Note: House shifts position relative to origin
[Figure: house scaled about the origin; it grows and moves away from the origin]
100. Geometric Scaling
– The scaling of a point P(x,y) by scaling factors Sx and Sy
about origin can be written in matrix form as:
P' = S_{sx,sy} · P, where S_{sx,sy} = [sx 0 0; 0 sy 0; 0 0 1],
such that [x'; y'; 1] = [sx 0 0; 0 sy 0; 0 0 1] · [x; y; 1] = [sx·x; sy·y; 1]
103. Geometric Rotation
– The rotation of a point P (x,y) about origin, by specified
angle θ (>0 counter clockwise) can be obtained as
x’ = x × cosθ – y × sinθ
y’ = x × sinθ + y × cosθ
– To rotate an object we have to rotate all coordinates
[Figure: point (x, y) rotated by angle θ about the origin to (x', y')]
Let us derive these equations
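The rotation formulas can be checked numerically with a small illustrative sketch (counter-clockwise for θ > 0):

```python
# Rotating a point about the origin using
# x' = x*cos(theta) - y*sin(theta), y' = x*sin(theta) + y*cos(theta).

from math import cos, sin, radians

def rotate(x, y, theta):
    """Return (x', y') after rotating (x, y) by theta radians about the origin."""
    return (x * cos(theta) - y * sin(theta),
            x * sin(theta) + y * cos(theta))

xp, yp = rotate(1.0, 0.0, radians(90))  # (1, 0) rotated 90 degrees CCW
print(round(xp, 6), round(yp, 6))       # -> 0.0 1.0
```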
107. Geometric Reflection
– Mirror reflection is obtained about X-axis
x’ = x
y’ = – y
– Mirror reflection is obtained about Y-axis
x’ = – x
y’ = y
[Figure: object reflected about the X-axis and about the Y-axis]
108. Geometric Reflection
– The reflection of a point P(x,y) about X-axis can be written
in matrix form as:
P' = M_x · P, where M_x = [1 0 0; 0 -1 0; 0 0 1],
such that [x'; y'; 1] = [1 0 0; 0 -1 0; 0 0 1] · [x; y; 1] = [x; -y; 1]
109. Geometric Reflection
– The reflection of a point P(x,y) about Y-axis can be written
in matrix form as:
P' = M_y · P, where M_y = [-1 0 0; 0 1 0; 0 0 1],
such that [x'; y'; 1] = [-1 0 0; 0 1 0; 0 0 1] · [x; y; 1] = [-x; y; 1]
110. Geometric Reflection
– The reflection of a point P(x,y) about origin can be written
in matrix form as:
P' = M_xy · P, where M_xy = [-1 0 0; 0 -1 0; 0 0 1],
such that [x'; y'; 1] = [-1 0 0; 0 -1 0; 0 0 1] · [x; y; 1] = [-x; -y; 1]
112. Geometric Shearing
– It is defined as tilting in a given direction
– General shearing:
x' = x + ay
y' = y + bx
[Figure: unit square sheared with a = 2, b = 3; corner (1, 1) maps to (3, 4)]
113. Geometric Shearing
– The shearing of a point P(x,y) in general can be written in
matrix form as:
P' = Sh_{a,b} · P, where Sh_{a,b} = [1 a 0; b 1 0; 0 0 1],
such that [x'; y'; 1] = [1 a 0; b 1 0; 0 0 1] · [x; y; 1] = [x + a·y; y + b·x; 1]
114. Geometric Shearing
– If b = 0, it becomes shearing about the X-axis
x' = x + ay
y' = y
[Figure: square sheared along x with a = 2; corner (1, 1) maps to (3, 1)]
115. Geometric Shearing
– The shearing of a point P(x,y) about X-axis can be written
in matrix form as:
P' = Sh_a · P, where Sh_a = [1 a 0; 0 1 0; 0 0 1],
such that [x'; y'; 1] = [1 a 0; 0 1 0; 0 0 1] · [x; y; 1] = [x + a·y; y; 1]
116. Geometric Shearing
– If a = 0, it becomes shearing about the Y-axis
x' = x
y' = y + bx
[Figure: square sheared along y with b = 3; corner (1, 1) maps to (1, 4)]
117. Geometric Shearing
– The shearing of a point P(x,y) about Y-axis can be written
in matrix form as:
P' = Sh_b · P, where Sh_b = [1 0 0; b 1 0; 0 0 1],
such that [x'; y'; 1] = [1 0 0; b 1 0; 0 0 1] · [x; y; 1] = [x; y + b·x; 1]
119. Inverse Transformations
– Inverse Translation: Displacement in direction of –V
– Inverse Scaling: Division by Sx and Sy
T_v^(-1) = T_{-v} = [1 0 -tx; 0 1 -ty; 0 0 1]
S_{sx,sy}^(-1) = S_{1/sx, 1/sy} = [1/sx 0 0; 0 1/sy 0; 0 0 1]
120. Inverse Transformations
– Inverse Rotation: Rotation by an angle of -θ
R_θ^(-1) = R_{-θ} = [cos θ  sin θ  0; -sin θ  cos θ  0; 0 0 1]
– Inverse Reflection: Reflect once again
M_x^(-1) = M_x = [1 0 0; 0 -1 0; 0 0 1]
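The inverse relations can be verified numerically; a pure-Python sketch (the helper names are illustrative):

```python
# Checking that a translation composed with its inverse gives the
# identity: T_v · T_v^(-1) = T_v^(-1) · T_v = I.

def mat_mul(a, b):
    """Multiply two 3 x 3 matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def translation(tx, ty):
    return [[1, 0, tx], [0, 1, ty], [0, 0, 1]]

I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
T = translation(5, 7)
T_inv = translation(-5, -7)   # inverse: displacement in the direction of -V
print(mat_mul(T, T_inv) == I)  # -> True
print(mat_mul(T_inv, T) == I)  # -> True
```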
122. Transformations
– Geometric Transformations: In a geometric transformation
the object itself is moved relative to a stationary coordinate
system or background. The mathematical statement of this
viewpoint is described by geometric transformations applied
to each point of the object.
– Coordinate Transformation: The object is held stationary
while the coordinate system is moved relative to the object. These
can easily be described in terms of the opposite operation
performed by the corresponding geometric transformation.
123. Coordinate Transformations
– Coordinate Translation: Displacement of the coordinate
system origin in direction of –V
– Coordinate Scaling: Scaling an object by Sx and Sy or
reducing the scale of coordinate system.
  T̄v = T(-v) = | 1 0 -tx |
               | 0 1 -ty |
               | 0 0   1 |

  S̄(sx, sy) = S(1/sx, 1/sy) = | 1/sx    0   0 |
                              |    0 1/sy   0 |
                              |    0    0   1 |
124. Coordinate Transformations
– Coordinate Rotation: Rotating the coordinate system by an
angle of –θ
– Coordinate Reflection: Same as Geometric Reflection
(why?)
  R̄(θ) = R(-θ) = |  cos θ   sin θ   0 |
                 | -sin θ   cos θ   0 |
                 |    0       0     1 |

  M̄ = M
126. Composite Transformations
– A number of transformations can be combined into one matrix to
make things easy
• Allowed by the fact that we use homogenous coordinates
– Matrix composition works right-to-left.
Compose:
Then apply it to a point:
It first applies C to v, then applies B to the result, then applies A to the result of that.
M = A · B · C
v′ = M v = (A · B · C) v = A (B (C v))
127. Composite Transformations
• Matrix multiplication does NOT commute:
– (unless one or the other is a uniform scale)
– Try this: rotate 90 degrees about x then 90 degrees about y, versus
rotate 90 degrees about y then 90 degrees about x.
M N ≠ N M
128. Composite Transformations
Rotation about Arbitrary Point (h,k)
– Imagine rotating a polygon around a point (h,k) other than
the origin
• Translate the centre point to the origin
• Rotate around the origin
• Translate back to the centre point
130. Composite Transformations
– The three transformation matrices are combined as follows
  P′(x′, y′) = T(h, k) · R(θ) · T(-h, -k) · P(x, y)

  | 1 0 h |   | cos θ  -sin θ  0 |   | 1 0 -h |
  | 0 1 k | · | sin θ   cos θ  0 | · | 0 1 -k |
  | 0 0 1 |   |   0       0    1 |   | 0 0  1 |

REMEMBER: Matrix multiplication is not
commutative so order matters
131. Composite Transformations
– The composite Transformation is
  R(θ, (h, k)) = | 1 0 h |   | cos θ  -sin θ  0 |   | 1 0 -h |
                 | 0 1 k | · | sin θ   cos θ  0 | · | 0 1 -k |
                 | 0 0 1 |   |   0       0    1 |   | 0 0  1 |

               = | cos θ  -sin θ  h(1 - cos θ) + k sin θ |
                 | sin θ   cos θ  k(1 - cos θ) - h sin θ |
                 |   0       0              1            |
132. Exercises 1
[Figure: diamond with vertices (1, 2), (2, 1), (3, 2), (2, 3) plotted on a 10 × 6 grid]
Translate the shape below by (7, 2)
133. Exercises 2
[Figure: diamond with vertices (1, 2), (2, 1), (3, 2), (2, 3) plotted on a 10 × 6 grid]
Scale the shape below by 3 in x and 2 in y
134. Exercises 3
Rotate the shape below by 30° about the origin
[Figure: diamond with vertices (6, 2), (7, 1), (8, 2), (7, 3) plotted on a 10 × 6 grid]
136. Exercises 5
Using matrix multiplication calculate the rotation of the
shape below by 45° about its centre (5, 3)
[Figure: diamond with vertices (4, 3), (5, 2), (6, 3), (5, 4) plotted on a 10 × 5 grid]
139. Exercise 8
Describe transformation ML which reflects an object about a
Line L: y=m*x+b.
  ML = 1/(1 + m²) · | 1 - m²    2m     -2bm   |
                    |   2m    m² - 1    2b    |
                    |   0       0     1 + m²  |
140. Exercise 9
Reflect the diamond-shaped polygon whose vertices are A(-1,0),
B(0,-2), C(1,0) and D(0,2) about
1. Horizontal line y = 2
2. Vertical line x = 2
3. Line L: y = x + 2
  M(y=2) = | 1  0  0 |     M(x=2) = | -1  0  4 |     ML = | 0  1  -2 |
           | 0 -1  4 |              |  0  1  0 |          | 1  0   2 |
           | 0  0  1 |              |  0  0  1 |          | 0  0   1 |
142. Exercise 11
Prove that
1. Two successive translations are additive / commutative.
2. Two successive rotations are additive / commutative.
3. Two successive scalings are multiplicative / commutative.
4. Two successive reflections nullify each other (a reflection is its own inverse).
Is translation followed by rotation equal to rotation followed
by translation?
146. 1. Definitions: It is defined as a process for displaying views
of a two-dimensional picture on an output device:
– Specify which parts of the object to display (clipping window, or world
window, or viewing window)
– Where on the screen to display these parts (view port).
2D Viewing Transformation
147. – World Coordinate System is a right-handed Cartesian
coordinate system in which the picture is actually defined.
– Physical Device Coordinate System is a coordinate
system that corresponds to the output device or workstation
where the image is to be displayed. E.g. with our monitor it is a
left-handed Cartesian coordinate system.
– Normalized Coordinate System is a right-handed
coordinate system in which the display area of the virtual display
device corresponds to the unit square (1 × 1).
2D Viewing Transformation
148. – Window or Clipping window is the selected section of a
scene that is displayed on a display window. It is a finite
region from World Coordinate System.
– View port is the window where the object is viewed on the
output device. It is a finite region from Device Coordinate
System.
2D Viewing Transformation
150. 2. 2D viewing pipeline
– Construct world-coordinate scene using modeling-
coordinate transformations
– Convert world-coordinates to viewing coordinates
– Transform viewing-coordinates to normalized-coordinates
(ex: between 0 and 1, or between -1 and 1)
– Map normalized-coordinates to device-coordinates.
[Pipeline: CONSTRUCT WORLD-COORDINATE SCENE → CONVERT WORLD COORD TO VIEWING COORD → MAP VIEWING COORD TO NORMALIZED VIEW COORD → MAP NORMALIZED VIEW COORD TO DEVICE COORD]
MC → WC → VC → NC → DC
2D Viewing Transformation
152. 3. Approaches
– Two main approaches to 2D viewing Transformation
are
1. Direct Approach
2. Normalized Approach
2D Viewing Transformation
153. 3.1. Direct Approach
– Mapping the clipping window into a view port which
may be normalized
– It involves
• translating the origin of the clipping window to that of the view
port
• scaling the clipping window to the size of the view port
– We can derive it by three methods
2D Viewing Transformation
155. 2D Viewing Transformation
Method 1: Let P (xw,yw) be any point in the window, which is
to be mapped to P’ (xv,yv) in the associated view port. The
two steps required are:
1. Translation Tv that takes (xwmin,ywmin) to (xvmin,yvmin) , where
v = (xvmin – xwmin)I + (yvmin – ywmin)J
2. Scaling by following scaling factors about point (xvmin,yvmin)
  sx = (xvmax - xvmin) / (xwmax - xwmin)   and   sy = (yvmax - yvmin) / (ywmax - ywmin)
157. 2D Viewing Transformation
Method 2: We get the same result if we perform
  | xv |   | 1 0 xvmin |   | sx  0  0 |   | 1 0 -xwmin |   | xw |
  | yv | = | 0 1 yvmin | · |  0 sy  0 | · | 0 1 -ywmin | · | yw |
  |  1 |   | 0 0   1   |   |  0  0  1 |   | 0 0    1   |   |  1 |

  i.e.  V = T(xvmin, yvmin) · S(sx, sy) · T(-xwmin, -ywmin)
158. 2D Viewing Transformation
Method 3: Let P (xw,yw) be any point in the window, which is
to be mapped to P’ (xv,yv) in the associated view port. To
maintain the same relative placement in the view port as in
window, we require that
  (xv - xvmin) / (xvmax - xvmin) = (xw - xwmin) / (xwmax - xwmin)

  and

  (yv - yvmin) / (yvmax - yvmin) = (yw - ywmin) / (ywmax - ywmin)
159. 2D Viewing Transformation
Which means
  xv = xvmin + (xw - xwmin) · sx
  yv = yvmin + (yw - ywmin) · sy

  where  sx = (xvmax - xvmin) / (xwmax - xwmin)
         sy = (yvmax - yvmin) / (ywmax - ywmin)
Put these equations in the matrix form.
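The two equations above translate directly into a small mapping routine. A Python sketch (the function name and tuple layout are our own choice):

```python
def window_to_viewport(xw, yw, win, view):
    """Map a world point (xw, yw) from a clipping window to a view port.

    win and view are (xmin, ymin, xmax, ymax) tuples.
    Implements xv = xvmin + (xw - xwmin) * sx, and likewise for y.
    """
    wx0, wy0, wx1, wy1 = win
    vx0, vy0, vx1, vy1 = view
    sx = (vx1 - vx0) / (wx1 - wx0)
    sy = (vy1 - vy0) / (wy1 - wy0)
    return vx0 + (xw - wx0) * sx, vy0 + (yw - wy0) * sy

# Window corners map to the corresponding view-port corners
assert window_to_viewport(1, 1, (1, 1, 3, 5), (0, 0, 1, 1)) == (0.0, 0.0)
assert window_to_viewport(3, 5, (1, 1, 3, 5), (0, 0, 1, 1)) == (1.0, 1.0)
```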
160. 2D Viewing Transformation
3. 2 Normalized Approach
– Mapping the clipping window into a normalized square
– It involves
• transforming the clipping window into a normalized square
• Then clipping in normalized coordinates (ex: -1 to 1) or (ex: 0 to 1)
• Then transferring the scene description to a view port specified in
screen coordinates.
• Finally positioning the view port area in the display window.
– Thus V = W.N, where N is a transformation that maps
window to normalized view port and W maps Normalized
points to view port.
164. 4. Aspect Ratio
– Since viewing involves scaling, undesirable
distortions may be introduced when sx ≠ sy
– Aspect ratio is defined as (ymax-ymin)/ (xmax-xmin)
– If aw= av => no distortion
– If aw> av => Horizontal spanning
– If aw< av => Vertical spanning
2D Viewing Transformation
[Figure: window with aw = 1 mapped to a centered sub-view with av = 3/4]
165. Exercise 1
Find the normalization transformation that maps a window
defined by (1,1) to (3,5) on to
1. The view port that is the entire normalized device.
2. The view port that has lower left corner (0,0) and upper right
corner (1/2,1/2)
  1.  V = | 1/2   0   -1/2 |        2.  V = | 1/4   0   -1/4 |
          |  0   1/4  -1/4 |                |  0   1/8  -1/8 |
          |  0    0     1  |                |  0    0     1  |
166. Exercise 2
Find the complete viewing transformation that
1. First maps a window defined by (1,1) to (10,10) on to a view port
of size(1/4,0) to (3/4,1/2) in normalized device space
2. Then maps a window of (1/4,1/4) to (1/2,1/2) in normalized device
space to a view port of (1,1) to (10,10)
  N = | 1/18    0    7/36 |     W = | 36   0  -8 |     V = W · N = | 2  0   -1 |
      |  0    1/18  -1/18 |         |  0  36  -8 |                 | 0  2  -10 |
      |  0     0      1   |         |  0   0   1 |                 | 0  0    1 |
167. Exercise 3
Find the normalization transformation that maps a window
defined by (0,0) to (4,3) on to normalized device screen
keeping aspect ratio preserved.
Sol: aw = 3/4, av = 1
In the normalized device we will keep
x extent 0 to 1 and y extent 0 to 3/4
Sx = (1 - 0)/(4 - 0) = 1/4
Sy = (3/4 - 0)/(3 - 0) = 1/4

  N = | 1/4   0   0 |
      |  0   1/4  0 |
      |  0    0   1 |
170. 2D Clipping
1. Introduction:
A scene is made up of a collection of objects specified in world
coordinates
World Coordinates
171. 2D Clipping
When we display a scene only those objects within a particular
window are displayed
[Figure: window wxmin..wxmax × wymin..wymax selecting part of the world-coordinate scene]
172. 2D Clipping
Because drawing things to a display takes time we clip
everything outside the window
[Figure: everything outside the window wxmin..wxmax × wymin..wymax is clipped]
173. 1.1 Definitions:
– Clipping is the process of determining which elements of
the picture lie inside the window and are visible.
– Shielding is the reverse operation of clipping, where the
window acts as a block used to obstruct the view.
– By default, the “clip window” is the entire canvas
• not necessary to draw outside the canvas
• for some devices, it is damaging (plotters)
– Sometimes it is convenient to restrict the “clip window” to
a smaller portion of the canvas
• partial canvas redraw for menus, dialog boxes, other obscuration
2D Clipping
174. 2D Clipping
1.2 Example:
For the image below consider which lines and points should be kept
and which ones should be clipped against the clipping window
[Figure: clipping window with points P1..P10 and line segments lying inside, outside, and crossing the window boundary]
175. 1.3 Applications:
– Extract part of a defined scene for viewing.
– Drawing operations such as erase, copy, move etc.
– Displaying multi view windows.
– Creating objects using solid modeling techniques.
– Anti-aliasing line segments or object boundaries.
– Identify visible surfaces in 3D views.
2D Clipping
176. 2D Clipping
1.4 Types of clipping:
– Three types of clipping techniques are used depending
upon when the clipping operation is performed
a. Analytical clipping
– Clip it before you scan convert it
– used mostly for lines, rectangles, and polygons, where
clipping algorithms are simple and efficient
177. 2D Clipping
b. Scissoring
– Clip it during scan conversion
– a brute force technique
• scan convert the primitive, only write pixels if inside the clipping
region
• easy for thick and filled primitives as part of scan line fill
• if primitive is not much larger than clip region, most pixels will fall
inside
• can be more efficient than analytical clipping.
178. 2D Clipping
c. Raster Clipping
– Clip it after scan conversion
– render everything onto a temporary canvas and copy the
clipping region
• wasteful, but simple and easy,
• often used for text
179. 2D Clipping
Foley and van Dam suggest the following:
– for floating point graphics libraries, try to use analytical
clipping
– for integer graphics libraries
• analytical clipping for lines and polygons
• others, do during scan conversion
– sometimes both analytical and raster clipping performed
180. 1.5 Levels of clipping:
– Point Clipping
– Line Clipping
– Polygon Clipping
– Area Clipping
– Text Clipping
– Curve Clipping
2D Clipping
182. Point Clipping
– Simple and Easy
– a point (x,y) is not clipped if:
wxmin ≤ x ≤ wxmax
&
wymin ≤ y ≤ wymax
– otherwise it is clipped
[Figure: P1, P2, P5, P9 lie within the window and are not clipped; P4, P7, P8, P10 are clipped]
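The inequality pair above is a one-line test. A Python sketch (the function name is ours):

```python
def clip_point(x, y, wxmin, wymin, wxmax, wymax):
    """A point survives clipping iff it lies inside the window (inclusive)."""
    return wxmin <= x <= wxmax and wymin <= y <= wymax

assert clip_point(5, 5, 0, 0, 10, 10)        # inside: kept
assert not clip_point(11, 5, 0, 0, 10, 10)   # right of window: clipped
```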
184. Line Clipping
– It is harder than point clipping
– We first examine the end-points of each line to see if they
are in the window or not
• Both endpoints inside, line
trivially accepted
• One in and one out, line is
partially inside
• Both outside, might be
partially inside
• What about trivial cases?
[Figure: lines relative to the window xmin..xmax × ymin..ymax]
185. Line Clipping
Situation                                      Solution
Both end-points inside the window              Don’t clip
One end-point inside the window, one outside   Must clip
Both end-points outside the window             Don’t know!
186. 2D Line Clipping Algorithms
1. Analytical Line Clipping
2. Cohen Sutherland Line Clipping
3. Liang Barsky Line Clipping
187. Analytical Line Clipping
Also called Brute force line clipping can be performed
as follows:
1. Don’t clip lines with both
end-points within the
window
188. Analytical Line Clipping
2. For lines with one end-point inside the window and one
end-point outside, calculate the intersection point (using
the equation of the line) and clip from this point out
– Use parametric equations
x=x0+t(x1-x0)
y=y0+t(y1-y0)
– Intersection if 0 ≤t ≤ 1
189. Analytical Line Clipping
3. For lines with both end-points
outside the window test the
line for intersection with all of
the window boundaries, and
clip appropriately
+ Very Simple method
– However, calculating line intersections is computationally
expensive
– Because a scene can contain so many lines, the brute force
approach to clipping is much too slow
190. 2D Line Clipping Algorithms
1. Analytical Line Clipping
2. Cohen Sutherland Line Clipping
3. Liang Barsky Line Clipping
191. Cohen-Sutherland Line Clipping
– An efficient line clipping
algorithm
– The key advantage of the
algorithm is that it vastly
reduces the number of line
intersections that must be
calculated.
Dr. Ivan E. Sutherland co-developed the Cohen-Sutherland clipping
algorithm. Sutherland is a graphics giant and includes amongst his
achievements the invention of the head-mounted display.
Cohen is something of a mystery – can anybody find out who he was?
192. Cohen-Sutherland Line Clipping
– Two phases Algorithm
Phase I: Identification Phase
All line segments fall into one of the following categories
1. Visible: Both endpoints lies inside
2. Invisible: Line completely lies outside
3. Clipping Candidate: A line neither in category 1 or 2
Phase II: Perform Clipping
Compute intersection for all lines that are candidate for
clipping.
193. Cohen-Sutherland Line Clipping
Phase I: Identification Phase: World space is divided into regions
based on the window boundaries
– Each region has a unique four bit region code
– Region codes indicate the position of the regions with respect to the
window
  1001 | 1000 | 1010
  -----+------+-----
  0001 | 0000 | 0010
       (Window = 0000)
  -----+------+-----
  0101 | 0100 | 0110

Region code legend — bit 3: above, bit 2: below, bit 1: right, bit 0: left
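The region code is built by testing the point against each boundary and setting the corresponding bit. A Python sketch following the legend above (the constant names are ours):

```python
ABOVE, BELOW, RIGHT, LEFT = 8, 4, 2, 1   # bits 3, 2, 1, 0 of the region code

def region_code(x, y, wxmin, wymin, wxmax, wymax):
    """Four-bit Cohen-Sutherland region code of the point (x, y)."""
    code = 0
    if x < wxmin: code |= LEFT
    if x > wxmax: code |= RIGHT
    if y < wymin: code |= BELOW
    if y > wymax: code |= ABOVE
    return code

# A point above and to the left of the window gets code 1001 (binary)
assert region_code(-1, 20, 0, 0, 10, 10) == (ABOVE | LEFT) == 0b1001
```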
194. Cohen-Sutherland Line Clipping
Every end-point is labelled with the appropriate region code
[Figure: end-points labelled with region codes — P3 [0001], P4 [1000], P5 [0000], P6 [0000], P7 [0001], P8 [0010], P9 [0000], P10 [0100], P11 [1010], P12 [0010], P13 [0101], P14 [0110]]
195. Cohen-Sutherland Line Clipping
Visible Lines: Lines completely contained within the window
boundaries have region code [0000] for both end-points so are
not clipped
[Figure: the same labelled scene as on the previous slide]
196. Cohen-Sutherland Line Clipping
Invisible Lines: Any line with a common set bit in the region
codes of both end-points can be clipped completely
– The AND operation can efficiently check this
– Non Zero means Invisible
[Figure: the same labelled scene as on the previous slide]
197. Cohen-Sutherland Line Clipping
Clipping Candidates: Lines that cannot be identified as
completely inside or outside the window may or may not cross
the window interior. These lines are processed in Phase II.
– If AND operation result in 0 the line is candidate for clipping
[Figure: the same labelled scene as on the previous slide]
198. Cohen-Sutherland Clipping Algorithm
Phase II: Clipping Phase: Lines that are in category 3 are now
processed as follows:
– Compare an end-point outside the window to a boundary
(choose any order in which to consider boundaries e.g. left,
right, bottom, top) and determine how much can be
discarded
– If the remainder of the line is entirely inside or outside the
window, retain it or clip it respectively
– Otherwise, compare the remainder of the line against the
other window boundaries
– Continue until the line is either discarded or a segment
inside the window is found
199. Cohen-Sutherland Line Clipping
• Intersection points with the window boundaries are calculated
using the line-equation parameters
– Consider a line with the end-points (x1, y1) and (x2, y2)
– The y-coordinate of an intersection with a vertical window
boundary can be calculated using:
y = y1 + m (xboundary - x1)
where xboundary can be set to either wxmin or wxmax
– The x-coordinate of an intersection with a horizontal
window boundary can be calculated using:
x = x1 + (yboundary - y1) / m
where yboundary can be set to either wymin or wymax
200. Cohen-Sutherland Line Clipping
• We can use the region codes to determine which window
boundaries should be considered for intersection
– To check if a line crosses a particular boundary we
compare the appropriate bits in the region codes of its end-
points
– If one of these is a 1 and the other is a 0 then the line
crosses the boundary.
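Phases I and II together can be sketched as follows. This is our own illustrative Python implementation of the algorithm described above, not code from the slides; helper and constant names are ours:

```python
ABOVE, BELOW, RIGHT, LEFT = 8, 4, 2, 1

def code(x, y, xmin, ymin, xmax, ymax):
    """Four-bit region code of (x, y)."""
    c = 0
    if x < xmin: c |= LEFT
    if x > xmax: c |= RIGHT
    if y < ymin: c |= BELOW
    if y > ymax: c |= ABOVE
    return c

def cohen_sutherland(x1, y1, x2, y2, xmin, ymin, xmax, ymax):
    """Return the clipped segment, or None if the line is invisible."""
    c1 = code(x1, y1, xmin, ymin, xmax, ymax)
    c2 = code(x2, y2, xmin, ymin, xmax, ymax)
    while True:
        if c1 == 0 and c2 == 0:        # visible: both end-points inside
            return (x1, y1), (x2, y2)
        if c1 & c2:                    # invisible: a common set bit
            return None
        c = c1 or c2                   # clipping candidate: pick an outside end
        if c & ABOVE:
            x, y = x1 + (x2 - x1) * (ymax - y1) / (y2 - y1), ymax
        elif c & BELOW:
            x, y = x1 + (x2 - x1) * (ymin - y1) / (y2 - y1), ymin
        elif c & RIGHT:
            y, x = y1 + (y2 - y1) * (xmax - x1) / (x2 - x1), xmax
        else:                          # LEFT
            y, x = y1 + (y2 - y1) * (xmin - x1) / (x2 - x1), xmin
        if c == c1:                    # replace the clipped end-point
            x1, y1 = x, y
            c1 = code(x1, y1, xmin, ymin, xmax, ymax)
        else:
            x2, y2 = x, y
            c2 = code(x2, y2, xmin, ymin, xmax, ymax)

# A diagonal across a 10x10 window is trimmed to the window corners
assert cohen_sutherland(-5, -5, 15, 15, 0, 0, 10, 10) == ((0.0, 0.0), (10.0, 10.0))
```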
201. Cohen-Sutherland Line Clipping
Example1: Consider the line P9 to P10 below
– Start at P10
– From the region codes
of the two end-points we
know the line doesn’t
cross the left or right
boundary
– Calculate the intersection
of the line with the bottom boundary
to generate point P10’
– The line P9 to P10’ is completely inside the window so is
retained
[Figure: P9 [0000], P10 [0100], and the clipped point P10’ [0000] on the bottom boundary]
202. Cohen-Sutherland Line Clipping
Example 2: Consider the line P3 to P4 below
– Start at P4
– From the region codes
of the two end-points
we know the line
crosses the left
boundary so calculate
the intersection point to
generate P4’
– The line P3 to P4’ is completely outside the window so is
clipped
[Figure: P3 [0001], P4 [1000], and the intersection P4’ [1001] on the left boundary]
203. Cohen-Sutherland Line Clipping
Example 3: Consider the line P7 to P8 below
– Start at P7
– From the two region
codes of the two
end-points we know
the line crosses the
left boundary so
calculate the
intersection point to
generate P7’
[Figure: P7 [0001], P8 [0010], and the intersection P7’ [0000] on the left boundary]
204. Cohen-Sutherland Line Clipping
Example 4: Consider the line P7’ to P8
– Start at P8
– Calculate the
intersection with the
right boundary to
generate P8’
– P7’ to P8’ is inside
the window so is
retained
[Figure: P7’ [0000], P8 [0010], and the intersection P8’ [0000] on the right boundary]
205. Cohen-Sutherland Line Clipping
Mid-Point Subdivision Method
– Algorithm
1. Initialise the list of lines to all lines
2. Classify lines as in Phase I
3. Remove all lines from the list in category 1 or 2;
4. Divide all lines of category 3 are into two smaller segments at
mid-point (xm,ym) where xm = (x1 +x2)/2 and ym = (y1 +y2)/2
5. Remove the original line from list and enter its two newly created
segments.
6. Repeat step 2-5 until list is null.
207. Cohen-Sutherland Line Clipping
Mid-Point Subdivision Method
– Integer Version
– Fast as Division by 2 can be performed by simple shift
right operation
– For NxN max dimension of line number of subdivisions
required log2 N.
– Thus a 1024x1024 raster display requires just 10
subdivisions.
208. 2D Line Clipping Algorithms
1. Analytical Line Clipping
2. Cohen Sutherland Line Clipping
3. Liang Barsky Line Clipping
209. Liang-Barsky Line Clipping
Introduction:
– Cohen-Sutherland sometimes performs a lot of fruitless
clipping due to external intersections, but oldest, widely
published, most common
– Cyrus-Beck (1978) and Liang-Barsky (1984) are more
efficient
– C-B is a parametric line-clipping algorithm
– L-B is based on C-B algorithm. It adds efficient trivial
rejection tests
• can be used for 2D line clipping against arbitrary convex polygons
210. Liang-Barsky Line Clipping
– Using parametric equations, compute line segment
intersections (actually, just values of u) with clipping
region edges
– Determine if the four values of u actually correspond to
real intersections
– Then calculate x and y values of the intersections
– L-B examines values of u for earlier reject
211. Liang-Barsky Line Clipping
– Parametric definition of a line:
• x = x1 + uΔx
• y = y1 + uΔy
• Δx = (x2-x1), Δy = (y2-y1), 0<=u<=1
– Goal: find range of u for which x and y both inside the
viewing window
213. Liang-Barsky Line Clipping
– Rules:
1) pk = 0: the line is parallel to boundaries
– If for that same k, qk < 0, it’s outside
– Otherwise it’s inside
2) pk < 0: the line starts outside this boundary
– rk = qk/pk
– u1 = max(0, rk, u1)
3) pk > 0: the line starts inside the boundary
– rk = qk/pk
– u2 = min(1, rk, u2)
4) If u1 > u2, the line is completely outside
214. Liang-Barsky Line Clipping
1. A line parallel to a clipping window edge has pk = 0 for that
boundary.
2. If for that k, qk < 0, the line is completely outside and can be
eliminated.
3. When pk < 0 the line proceeds outside to inside the clip window and
when pk > 0, the line proceeds inside to outside.
4. For nonzero pk, u = qk / pk gives the intersection point.
5. For each line, calculate u1 and u2. For u1, look at boundaries for
which pk < 0 (outside → in). Take u1 to be the largest among (0, qk /
pk). For u2, look at boundaries for which pk > 0 (inside → out).
Take u2 to be the minimum of (1, qk / pk). If u1 > u2, the line is
outside and therefore rejected.
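A compact Python sketch of these rules (our own illustration, not slide code). The p and q definitions used here are the standard ones for the boundary order left, right, bottom, top — p = (-Δx, Δx, -Δy, Δy), with q the signed distances from the first end-point to each boundary:

```python
def liang_barsky(x1, y1, x2, y2, xmin, ymin, xmax, ymax):
    """Return the clipped segment, or None if the line is rejected."""
    dx, dy = x2 - x1, y2 - y1
    p = [-dx, dx, -dy, dy]                           # left, right, bottom, top
    q = [x1 - xmin, xmax - x1, y1 - ymin, ymax - y1]
    u1, u2 = 0.0, 1.0
    for pk, qk in zip(p, q):
        if pk == 0:              # rule 1: parallel to this boundary
            if qk < 0:
                return None      # parallel and outside: reject
        else:
            r = qk / pk
            if pk < 0:           # rule 2: outside -> in, candidate u1
                u1 = max(u1, r)
            else:                # rule 3: inside -> out, candidate u2
                u2 = min(u2, r)
    if u1 > u2:                  # rule 4: completely outside
        return None
    return (x1 + u1 * dx, y1 + u1 * dy), (x1 + u2 * dx, y1 + u2 * dy)

# A horizontal line crossing the window is trimmed to its two boundaries
assert liang_barsky(-5, 5, 15, 5, 0, 0, 10, 10) == ((0.0, 5.0), (10.0, 5.0))
assert liang_barsky(-5, 20, 15, 20, 0, 0, 10, 10) is None
```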
215. Liang-Barsky Line Clipping
– Faster than Cohen-Sutherland, does not need to iterate
– can be used for 2D line clipping against arbitrary convex
polygons
– Also extends to 3D
• Add z = z1 + uΔz
• Add 2 more p’s and q’s
• Still only 2 u’s
219. Polygon Clipping
• Some difficulties:
– Maintaining correct inside/outside
– Variable number of vertices
– Handle screen corners
correctly
220. Sutherland-Hodgman Area Clipping
• A technique for clipping areas
developed by Sutherland &
Hodgman
• Put simply the polygon is clipped
by comparing it against each
boundary in turn
[Figure: Original Area → Clip Left → Clip Right → Clip Top → Clip Bottom]
Sutherland turns up again, this time with Gary Hodgman, with whom he
worked at the first ever graphics company, Evans & Sutherland.
221. 1. Basic Concept:
• Simplify via separation
• Clip whole polygon against one edge
– Repeat with output for other 3 edges
– Similar for 3D
• You can create intermediate vertices that get thrown
out
Sutherland-Hodgeman Polygon Clipping
222. Sutherland-Hodgeman Polygon Clipping
• Example
Start Left Right Bottom Top
Note that one of the points added when clipping on the right
gets removed when we clip against the bottom
223. 2. Algorithm:
Let (P1, P2, …, PN) be the vertex list of the polygon to be
clipped and E be an edge of the positively oriented, convex
clipping window.
We clip each edge of the polygon in turn against each window
edge E, forming a new polygon whose vertices are determined
as follows:
Sutherland-Hodgeman Polygon Clipping
224. Sutherland-Hodgeman Polygon Clipping
Creating the new vertex list — four cases for the edge Pi-1 → Pi:
– Inside (in → in): save the ending vertex Pi
– Leaving (in → out): save the new clip vertex only
– Entering (out → in): save the new clip vertex and the ending vertex Pi
– Outside (out → out): save nothing
225. Four cases
1. Inside: If both Pi-1 and Pi are to the left of the window edge,
then Pi is placed on the output vertex list.
2. Entering: If Pi-1 is to the right of the window edge and Pi is to
the left, then the intersection (I) of Pi-1 Pi with edge E and Pi
are placed on the output vertex list.
3. Leaving: If Pi-1 is to the left of the window edge and Pi is to
the right, then only the intersection (I) of Pi-1 Pi with edge E
is placed on the output vertex list.
4. Outside: If both Pi-1 and Pi are to the right of the window edge,
nothing is placed on the output vertex list.
Sutherland-Hodgeman Polygon Clipping
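The four cases can be sketched as a Python routine. This is our own illustration (names are hypothetical); inside() uses the left-of test C = (x2-x1)(y-y1) - (y2-y1)(x-x1), treating points exactly on the edge as inside (≥ 0):

```python
def inside(p, a, b):
    """True if p is to the left of (or on) the directed edge a -> b."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0]) >= 0

def intersect(p, q, a, b):
    """Intersection of segment pq with the infinite line through a and b."""
    x1, y1 = p; x2, y2 = q
    x3, y3 = a; x4, y4 = b
    denom = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / denom
    return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))

def sutherland_hodgman(polygon, clip_window):
    """Clip polygon against each edge of a positively oriented convex window."""
    output = list(polygon)
    n = len(clip_window)
    for i in range(n):
        a, b = clip_window[i], clip_window[(i + 1) % n]
        input_list, output = output, []
        if not input_list:
            break
        prev = input_list[-1]
        for cur in input_list:
            if inside(cur, a, b):
                if not inside(prev, a, b):       # entering: add clip vertex
                    output.append(intersect(prev, cur, a, b))
                output.append(cur)               # inside: keep ending vertex
            elif inside(prev, a, b):             # leaving: add clip vertex only
                output.append(intersect(prev, cur, a, b))
            prev = cur
    return output

# Clip a triangle against the unit square (CCW = positively oriented)
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
tri = [(0.5, 0.5), (1.5, 0.5), (0.5, 1.5)]
clipped = sutherland_hodgman(tri, square)
assert (0.5, 0.5) in clipped
assert all(0 <= px <= 1 and 0 <= py <= 1 for px, py in clipped)
```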
226. Flow Chart
START: INPUT VERTEX LIST (P1, P2, ..., PN)
FOR i = 1 TO (N-1) DO
  IF Pi-1 Pi INTERSECT E ? YES: COMPUTE I, OUTPUT I IN VERTEX LIST
  IF Pi TO LEFT OF E ? YES: OUTPUT Pi IN VERTEX LIST
  i = i + 1
(Special case for the first vertex — see next slide)
227. Flow Chart — special case for the first vertex
IF PN P0 INTERSECT E ? YES: COMPUTE I, OUTPUT I IN VERTEX LIST
END
YOU CAN ALSO APPEND AN ADDITIONAL VERTEX PN+1 = P1 AND AVOID THE
SPECIAL CASE FOR THE FIRST VERTEX
228. Sutherland-Hodgeman Polygon Clipping
Inside/Outside Test:
Let P(x, y) be the polygon vertex which is to be tested against
edge E defined from A(x1, y1) to B(x2, y2). Point P is said to be
to the left (inside) of E or AB iff

  C = | x2 - x1   x - x1 | = (x2 - x1)(y - y1) - (y2 - y1)(x - x1) > 0
      | y2 - y1   y - y1 |

otherwise it is said to be to the right (outside) of edge E.
229. Other Area Clipping Concerns
• Clipping concave areas can be a little more tricky as often
superfluous lines must be removed
• Clipping curves requires more work
– For circles we must find the two intersection points on the window
boundary
[Figures: concave-polygon clipping and circle clipping against the window]
231. Text Clipping
Text clipping relies on the concept of bounding rectangle
TYPES
– All or None String Clipping: The bounding rectangle is on the
word.
– All or None Character Clipping: The bounding rectangle is on
the character.
– Character Clipping
STRING
S T R I N G
232. Text Clipping
METHODS
– Point Clipping in case of Bit Mapped Fonts
– Curve, Line or Polygon clipping in case of Outlined fonts
235. Abstract
• Hidden-surface elimination methods
• Identifying visible parts of a scene from a viewpoint
• Numerous algorithms
– More memory - storage
– More processing time – execution time
– Only for special types of objects - constraints
• Deciding a method for a particular application
– Complexity of the scene
– Type of objects
– Available equipment
– Static or animated scene
<Ex. Wireframe Displays>
237. Classification of Visible-Surface Detection
Algorithms
• Object-space methods vs. Image-space methods
– Object definition directly vs. their projected images
– Most visible-surface algorithms use image-space methods
– Object-space can be used effectively in some cases
• Ex) Line-display algorithms
• Object-space methods
– Compares objects and parts of objects to each other
• Image-space methods
– Point by point at each pixel position on the projection plane
238. Sorting and Coherence
Methods
• To improve performance
• Sorting
– Facilitate depth comparisons
• Ordering the surfaces according to their distance
from the viewplane
• Coherence
– Take advantage of regularity
• Epipolar geometry
• Topological coherence
240. Inside-outside test
• A point (x, y, z) is “inside” a surface with plane
parameters A, B, C, and D if
• The polygon is a back face if
– V is a vector in the viewing direction from the eye(camera)
– N is the normal vector to a polygon surface
  Ax + By + Cz + D < 0

  V · N > 0,  where N = (A, B, C)
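The back-face condition V · N > 0 is a single dot product. A tiny Python sketch (the function names are ours):

```python
def dot(u, v):
    """Dot product of two 3-vectors."""
    return sum(a * b for a, b in zip(u, v))

def is_back_face(normal, view_dir):
    """Back face iff V . N > 0, with N the outward surface normal."""
    return dot(view_dir, normal) > 0

# Viewing along -z: a face whose outward normal also points along -z
# faces away from the viewer, so it is a back face.
V = (0, 0, -1)
assert is_back_face((0, 0, -1), V)
assert not is_back_face((0, 0, 1), V)
```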
241. Advanced Configuration
• In the case of concave polyhedron
– Need more tests
• Determine faces totally or partly obscured by other faces
– In general, back-face removal can be expected to eliminate
about half of the surfaces from further visibility tests
<View of a concave polyhedron with
one face partially hidden by other surfaces>
243. Characteristics
• Commonly used image-space approach
• Compares depths of each pixel on the projection plane
– Referred to as the z-buffer method
• Usually applied to scenes of polygonal surfaces
– Depth values can be computed very quickly
– Easy to implement
[Figure: surfaces S1, S2, S3 overlapping pixel position (x, y), viewed along Zv in view coordinates (Xv, Yv, Zv)]
244. Depth Buffer & Refresh Buffer
• Two buffer areas are required
– Depth buffer
• Store depth values for each (x, y) position
• All positions are initialized to minimum depth
– Usually 0 – most distant depth from the viewplane
– Refresh buffer
• Stores the intensity values for each position
• All positions are initialized to the background
intensity
245. Algorithm
• Initialize the depth buffer and refresh buffer
depth(x, y) = 0, refresh(x, y) = Ibackgnd
• For each position on each polygon surface
– Calculate the depth for each (x, y) position on the polygon
– If z > depth(x, y), then set
depth(x, y) = z, refresh(x, y) = Isurf(x, y)
• Advanced
– With resolution of 1024 by 1024
• Over a million positions in the depth buffer
– Process one section of the scene at a time
• Need a smaller depth buffer
• The buffer is reused for the next section
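The algorithm above can be sketched in Python. This is an illustrative toy, not slide code: surfaces are given as pre-rasterized (x, y, z, intensity) samples, which is our simplification; the depth buffer is initialised to 0 (the most distant value) per the slides' convention, so larger z means nearer:

```python
def zbuffer_render(width, height, surfaces, background=0):
    """Depth-buffer sketch over pre-rasterized surface samples."""
    depth = [[0.0] * width for _ in range(height)]       # depth(x, y) = 0
    frame = [[background] * width for _ in range(height)]  # refresh buffer
    for surface in surfaces:
        for x, y, z, intensity in surface:
            if z > depth[y][x]:            # nearer than what is stored
                depth[y][x] = z
                frame[y][x] = intensity
    return frame

# Two surfaces covering the same pixel: the nearer one (larger z) wins
s1 = [(0, 0, 0.3, 'red')]
s2 = [(0, 0, 0.7, 'blue')]
frame = zbuffer_render(2, 2, [s1, s2])
assert frame[0][0] == 'blue'
```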
247. Characteristics
• An extension of the ideas in the depth-buffer method
• The origin of this name
– At the other end of the alphabet from “z-buffer”
– Antialiased, area-averaged, accumulation-buffer
– Surface-rendering system developed by ‘Lucasfilm’
• REYES(Renders Everything You Ever Saw)
• A drawback of the depth-buffer method
– Deals only with opaque surfaces
– Can’t accumulate intensity values
for more than one surface
[Figure: a foreground transparent surface in front of a background opaque surface]
248. Algorithm(1 / 2)
• Each position in the buffer can reference a linked list of
surfaces
– Several intensities can be considered at each pixel position
– Object edges can be antialiased
• Each position in the A-buffer has two fields
– Depth field
• Stores a positive or negative real number
– Intensity field
• Stores surface-intensity information or a pointer value
[Figure: organization of an A-buffer pixel position — (a) single-surface overlap: depth field d > 0, intensity field stores I; (b) multiple-surface overlap: depth field d < 0, intensity field points to a list of surfaces Surf1, Surf2]
249. Algorithm(2 / 2)
• If the depth field is positive
– The number at that position is the depth
– The intensity field stores the RGB
• If the depth field is negative
– Multiple-surface contributions to the pixel
– The intensity field stores a pointer to a linked list of surfaces
– Data for each surface in the linked list
RGB intensity components
Opacity parameters(percent of transparency)
Depth
Percent of area coverage
Surface identifier
Pointers to next surface
251. Characteristics
• Extension of the scan-line algorithm for
filling polygon interiors
– For all polygons intersecting each scan line
• Processed from left to right
• Depth calculations for each overlapping surface
• The intensity of the nearest position is entered into
the refresh buffer
252. Tables for The Various Surfaces
• Edge table
– Coordinate endpoints for each line
– Slope of each line
– Pointers into the polygon table
• Identify the surfaces bounded by each line
• Polygon table
– Coefficients of the plane equation for each surface
– Intensity information for the surfaces
– Pointers into the edge table
253. Active List & Flag
• Active list
– Contain only edges across the current scan
line
– Sorted in order of increasing x
• Flag for each surface
– Indicate whether inside or outside of the
surface
– At the leftmost boundary of a surface
• The surface flag is turned on
– At the rightmost boundary of a surface
• The surface flag is turned off
254. Example
• Active list for scan line 1
– Edge table
• AB, BC, EH, and FG
• Between AB and BC, only
the flag for surface S1 is on
– No depth calculations are necessary
– Intensity for surface S1 is entered into the refresh buffer
• Similarly, between EH and FG, only the flag for S2
is on
[Figure: surface S1 with vertices A, B, C, D and surface S2 with vertices E, F, G, H, crossed by scan lines 1, 2, and 3]
255. Example(cont.)
• For scan line 2, 3
– AD, EH, BC, and FG
• Between AD and EH, only the flag for S1 is on
• Between EH and BC, the flags for both surfaces
are on
– Depth calculation is needed
– Intensities for S1 are loaded into the refresh buffer until
BC
– Take advantage of coherence
• Pass from one scan line to next
• Scan line 3 has the same active list as scan line 2
• Unnecessary to make depth calculations between
EH and BC
256. Drawback
• Only if surfaces don’t cut through or
otherwise cyclically overlap each other
– If any kind of cyclic overlap is present
• Divide the surfaces
258. Operations
• Image-space and object-space operations
– Sorting operations in both image and object-
space
– The scan conversion of polygon surfaces in
image-space
• Basic functions
– Surfaces are sorted in order of decreasing
depth
– Surfaces are scan-converted in order,
starting with the surface of greatest depth
267
259. Algorithm
• Referred to as the painter’s algorithm
– In creating an oil painting
• First paints the background colors
• The most distant objects are added
• Then the nearer objects, and so forth
• Finally, the foregrounds are painted over all objects
– Each layer of paint covers up the previous layer
• Process
– Sort surfaces according to their distance from the viewplane
– The intensities for the farthest surface are then entered into the
refresh buffer
– Each succeeding surface is then painted, in
decreasing depth order, over the intensities
already stored
268
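The painting process above can be sketched in a few lines. This is an illustrative sketch, not the full algorithm: the function name, the one-row "refresh buffer", and the span-based surface records are assumptions made for the example.

```python
# A minimal painter's-algorithm sketch: surfaces are scan-converted
# from farthest to nearest, so nearer paint covers farther paint.
# The one-row list of pixels stands in for the refresh buffer.

WIDTH = 8

def paint(surfaces):
    """surfaces: list of (depth, color, (x0, x1)) spans on one row.
    Larger depth = farther from the view plane."""
    buffer = [None] * WIDTH
    for depth, color, (x0, x1) in sorted(surfaces, reverse=True):
        for x in range(x0, x1):
            buffer[x] = color   # nearer surfaces overwrite farther ones
    return buffer

row = paint([(1.0, "near", (2, 5)), (9.0, "far", (0, 6))])
print(row)  # the near span overwrites the far surface where they overlap
```

The sort puts the farthest surface first, mirroring the step "the intensities for the farthest surface are entered into the refresh buffer" before nearer surfaces are painted.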
260. Overlapping Tests
• Tests for each surface that overlaps with S
– The bounding rectangle in the xy plane for the two surfaces do
not overlap (1)
– Surface S is completely behind the overlapping surface relative
to the viewing position (2)
– The overlapping surface is completely in front of S relative to the
viewing position (3)
– The projections of the two surfaces onto the viewplane do not
overlap (4)
• If all the surfaces pass at least one of the tests, none of
them is behind S
– No reordering is then necessary and S is scan converted
269
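Tests (1) and (2) can be sketched directly. These are illustrative helpers, not the book's code: the function names are made up, and the plane-sign convention in `behind_plane` (viewer on the positive side) is an assumption.

```python
# Sketches of overlap tests (1) and (2). Names and the plane-sign
# convention are assumptions for illustration.

def rects_disjoint(a, b):
    """Test (1): bounding rectangles in the xy plane do not overlap.
    a, b: rectangles as (xmin, xmax, ymin, ymax)."""
    ax0, ax1, ay0, ay1 = a
    bx0, bx1, by0, by1 = b
    return ax1 < bx0 or bx1 < ax0 or ay1 < by0 or by1 < ay0

def behind_plane(vertices, plane):
    """Test (2): S is completely behind the overlapping surface if
    every vertex of S lies on the far side of that surface's plane.
    plane: (A, B, C, D), with the viewer where Ax + By + Cz + D > 0."""
    A, B, C, D = plane
    return all(A*x + B*y + C*z + D < 0 for (x, y, z) in vertices)

print(rects_disjoint((0, 1, 0, 1), (2, 3, 0, 1)))      # no x overlap
print(rects_disjoint((0, 2, 0, 2), (1, 3, 1, 3)))      # rectangles overlap
```

If either check succeeds, no reordering against that surface is needed and S can be scan-converted immediately.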
<The tests are ordered from easy (1) to difficult (4)>
262. Surface Reordering
• If all four tests fail with S’
– Interchange surfaces S and S’ in the sorted
list
– Repeat the tests for each surface that is
reordered in the list
271
<Figure: reordering examples in the xv-zv plane: S is interchanged with S’ (<S, S’>); with three surfaces, S is interchanged with S’’, then S’’ with S’>
263. Drawback
• If two or more surfaces alternately
obscure each other
– Infinite loop
– Flag any surface that has been reordered to
a farther depth
• It can’t be moved again
– If an attempt is made to switch the surface a
second time
• Divide it into two parts to eliminate the cyclic loop
• The original surface is then replaced by the two
272
265. Characteristics
• Binary Space-Partitioning(BSP) Tree
• Determining object visibility by painting
surfaces onto the screen from back to
front
– Like the painter’s algorithm
• Particularly useful when
– The view reference point changes, but
– The objects in a scene are at fixed positions
274
266. Process
• Identifying surfaces
– “inside” and “outside” the partitioning plane
• Intersected object
– Divide the object into two separate objects (A
and B)
275
<Figure: partition planes P1 and P2 divide the scene into objects A, B, C, D; the corresponding BSP tree has P1 at the root, P2 nodes as its front and back children, and A, C, B, D as leaves>
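A back-to-front BSP traversal can be sketched with toy axis-aligned planes. The node layout, the `plane_x` representation, and the function names are assumptions for illustration; real BSP trees store arbitrary partition planes and polygon lists.

```python
# A minimal BSP-tree sketch: traverse back-to-front relative to the
# eye by visiting the far subtree first, then the polygons on the
# partition plane, then the near subtree.

class Node:
    def __init__(self, name, plane_x, front=None, back=None):
        self.name = name          # polygon(s) lying on this partition plane
        self.plane_x = plane_x    # toy plane: x = plane_x
        self.front, self.back = front, back

def back_to_front(node, eye_x, out):
    if node is None:
        return out
    if eye_x > node.plane_x:                   # eye on the front side
        back_to_front(node.back, eye_x, out)   # far side first
        out.append(node.name)
        back_to_front(node.front, eye_x, out)
    else:                                      # eye on the back side
        back_to_front(node.front, eye_x, out)
        out.append(node.name)
        back_to_front(node.back, eye_x, out)
    return out

tree = Node("P1", 0.0, front=Node("A", -1.0), back=Node("B", 1.0))
print(back_to_front(tree, eye_x=5.0, out=[]))   # painted farthest first
```

Because the test at each node depends on the eye position, the same tree serves any view reference point, which is why the method suits scenes of fixed objects viewed from changing positions.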
268. Characteristics
• Takes advantage of area coherence
– Locating view areas that represent part of a single surface
– Successively dividing the total viewing area into smaller
rectangles
• Until each small area is the projection of part of a single visible
surface or no surface
– Requires tests that either
• Identify the area as part of a single surface, or
• Tell us that the area is too complex to analyze easily
• Similar to constructing a quadtree
277
269. Process
• Starting with the total view
– Apply the identifying tests
– If the tests indicate that the view is sufficiently
complex
• Subdivide
– Apply the tests to each of the smaller areas
• Until each area belongs to a single surface
• Or until an area is reduced to the size of a single pixel
• Example
– With a resolution of 1024 × 1024
• At most 10 subdivisions before an area is
reduced to a point
278
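The recursive process above can be sketched as follows. The helper `is_simple` is a stand-in for the identifying tests on the next slide, and all names and the toy "complex point" criterion are assumptions for the example.

```python
# A recursive area-subdivision sketch: split a square view area until
# the identifying tests accept it or it reaches pixel size.

import math

def subdivide(x, y, size, is_simple, out):
    """Collect accepted areas as (x, y, size) squares."""
    if size == 1 or is_simple(x, y, size):
        out.append((x, y, size))
        return out
    half = size // 2
    for dx in (0, half):
        for dy in (0, half):
            subdivide(x + dx, y + dy, half, is_simple, out)
    return out

# A 1024 x 1024 view needs at most log2(1024) = 10 halvings per axis
# before an area is reduced to a single pixel.
print(int(math.log2(1024)))  # -> 10

# Toy criterion: only areas containing the point (3, 3) are "complex",
# and any area of side 2 or less is accepted as simple.
areas = subdivide(0, 0, 8,
                  lambda x, y, s: s <= 2 or not (x <= 3 < x + s and y <= 3 < y + s),
                  [])
print(len(areas))  # 7 areas cover the 8 x 8 view
```

Subdivision concentrates work where the view is complex: only the quadrants containing the "complex" point are split further.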
270. Identifying Tests
• Four possible relationships
– Surrounding surface
• Completely enclose the area
– Overlapping surface
• Partly inside and partly outside the area
– Inside surface
• Completely inside the area
– Outside surface
• Completely outside the area
• No further subdivisions are needed if one of the following conditions
is true
– All surfaces are outside surfaces with respect to the area
– Only one inside, overlapping, or surrounding surface is in the area
– A surrounding surface obscures all other surfaces within the area
boundaries (determined from depth sorting and the plane equations)
279
<Figure: the four possible surface classifications relative to an area: surrounding, overlapping, inside, and outside>
272. Characteristics
• Extension of area-subdivision method
• Projecting octree nodes onto the
viewplane
– Front-to-back order, depth-first traversal
• The nodes for the four front suboctants are
visited before the nodes for the four back
suboctants
• The pixel in the framebuffer is assigned that color
if no values have previously been stored
– Only the front colors are loaded
281
<Figure: octant numbering 0–7 of a region of space>
273. Displaying An Octree
• Map the octree onto a quadtree of visible
areas
– Traversing octree nodes from front to back in
a recursive procedure
– The quadtree representation for the
visible surfaces is loaded into the
framebuffer
282
<Figure: octants 0–7 in space, mapped to quadrant positions on the viewplane>
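The front-to-back loading rule can be sketched with a toy octree. The dict-based node layout, the function name, and the octant numbering (0–3 in front, 4–7 behind, so that front/back pairs project onto the same quadrant) are assumptions made for this example.

```python
# A sketch of front-to-back octree traversal: visit front suboctants
# before back ones, and keep only the first (frontmost) color written
# to each quadtree cell.

def load_front_colors(node, order, buffer, cell):
    """node: dict {octant_index: child}, with leaves as color strings.
    order: octant visiting order, front suboctants first."""
    if isinstance(node, str):          # leaf: a colored voxel
        if cell not in buffer:         # a value already stored wins
            buffer[cell] = node
        return buffer
    for octant in order:
        if octant in node:
            # assumed numbering: octants k and k+4 share a quadrant
            load_front_colors(node[octant], order, buffer, octant % 4)
    return buffer

tree = {0: "red", 4: "blue", 5: "green"}
buf = load_front_colors(tree, range(8), {}, None)
print(buf)  # octant 4 is hidden behind octant 0; octant 5 shows through
```

Because front octants are visited first, a pixel cell is assigned a color only "if no values have previously been stored", exactly as the slide states.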
275. Characteristics
• Based on geometric optics methods
– Trace the paths of light rays
• Line of sight from a pixel position on the viewplane through a scene
• Determine which objects intersect this line
• Identify the visible surface whose intersection point is closest to the
pixel
– Infinite number of light rays
• Consider only rays that pass through pixel positions
– Trace the light-ray paths backward from the pixels
• Effective visibility-detection method
– For scenes with curved surfaces
284
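The ray-casting idea above can be sketched for spheres, where the ray-surface intersection has a closed form. The function name, the sphere records, and the unit-direction assumption are illustrative choices, not from the slides.

```python
# A minimal ray-casting sketch: for the line of sight through one
# pixel, find the nearest object intersection.

import math

def nearest_hit(origin, direction, spheres):
    """spheres: list of (center, radius, color). direction is assumed
    to be a unit vector. Returns the color of the closest hit, or None."""
    best_t, best_color = math.inf, None
    ox, oy, oz = origin
    dx, dy, dz = direction
    for (cx, cy, cz), r, color in spheres:
        # Solve |o + t*d - c|^2 = r^2, a quadratic in t (|d| = 1).
        lx, ly, lz = ox - cx, oy - cy, oz - cz
        b = 2 * (dx*lx + dy*ly + dz*lz)
        c = lx*lx + ly*ly + lz*lz - r*r
        disc = b*b - 4*c
        if disc < 0:
            continue                       # ray misses this sphere
        t = (-b - math.sqrt(disc)) / 2     # nearer of the two roots
        if 0 < t < best_t:
            best_t, best_color = t, color
    return best_color

spheres = [((0, 0, 5), 1, "near"), ((0, 0, 9), 1, "far")]
print(nearest_hit((0, 0, 0), (0, 0, 1), spheres))  # -> near
```

Repeating this for every pixel position on the viewplane gives the backward-traced rays the slide describes; curved surfaces need only an intersection routine, which is why the method suits them well.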
278. Abstract
• Effective methods for curved surfaces
– Ray-casting
– Octree methods
• Approximate a curved surface as a set of
plane polygon surfaces
– Use one of the other hidden-surface methods
– More efficient as well as more accurate than
using ray casting and the curved-surface
equation
287
279. Curved-Surface
Representations
• Implicit equation of the form
• Parametric representation
• Explicit surface equation
– Useful for some cases
• A height function over an xy ground plane
• Scan-line and ray-casting algorithms
– Involve numerical approximation techniques
288
f(x, y, z) = 0 (implicit)
z = f(x, y) (explicit)
280. Surface Contour Plots
• Display a surface function with a set of
contour lines that show the surface shape
– Useful in math, physics, engineering, ...
• With an explicit representation
– Plot the visible-surface contour lines
– To obtain an xy plot
• Curves are plotted for selected values of z
• Using a specified interval Δz
289
y = f(x, z)
<Color-coded surface contour plot>
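Generating the contour lines can be sketched directly from the explicit form y = f(x, z). The function name, the sample surface, and the sampling grid are illustrative assumptions; a real plot would also apply visibility tests before drawing.

```python
# A sketch of surface contour plotting: for each selected z, stepped
# by a chosen interval dz, sample the curve y = f(x, z) to obtain one
# contour polyline of the surface.

def contour_lines(f, xs, z_min, z_max, dz):
    lines = []
    z = z_min
    while z <= z_max:
        lines.append([(x, f(x, z)) for x in xs])  # one polyline per z
        z += dz
    return lines

# Example surface y = x^2 + z, contours at z = 0, 1, 2.
lines = contour_lines(lambda x, z: x * x + z, [0, 1, 2], 0, 2, 1)
print(len(lines))  # -> 3
```

Each polyline is an xy plot of the surface at one z value; drawing them in depth order (or color-coding them, as in the plot above) reveals the surface shape.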
282. Characteristics
• In wireframe display
– Visibility tests are applied to surface edges
– Visible edge sections are displayed
– Hidden edge sections can be eliminated or
displayed differently from the visible edges
• Procedures for determining visibility of
edges
– Wireframe-visibility (visible-line detection,
hidden-line detection) methods
291
283. Wireframe Visibility Methods
• Compare each line to each surface
– Direct approach to identifying the visible lines
– Depth values are compared to the surfaces
– Use coherence methods
• Avoid explicitly testing every coordinate
• With depth-sorting
– Interiors are in the background color
– Boundaries are in the foreground color
– Processing the surfaces from back to front
• Hidden lines are erased by the nearer surfaces
292
284. Comparison(1 / 2)
• Back-face detection methods
– Fast and effective as an initial screening
• Eliminate many polygons from further visibility
tests
– In general, this can’t completely identify all
hidden surfaces
• Depth-buffer(z-buffer) method
– Fast and simple
– Two buffers
• Refresh buffer for the pixel intensities
• Depth buffer for the depth of the visible surface
293
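The two buffers of the depth-buffer method can be sketched over a single row of pixels. This is a minimal illustration; the function name and the convention that smaller depth means nearer the viewplane are assumptions for the example.

```python
# A minimal depth-buffer (z-buffer) sketch: a pixel is overwritten
# only when the new surface point is nearer than the stored depth.

def zbuffer(fragments, width):
    """fragments: (x, depth, color) surface samples on one pixel row;
    smaller depth = nearer the viewplane (assumed convention)."""
    depth = [float("inf")] * width   # depth buffer
    color = [None] * width           # refresh buffer
    for x, z, c in fragments:
        if z < depth[x]:             # nearer than what is stored?
            depth[x], color[x] = z, c
        # farther fragments are simply discarded
    return color

row = zbuffer([(0, 5.0, "far"), (0, 2.0, "near"), (1, 3.0, "mid")], 4)
print(row)  # -> ['near', 'mid', None, None]
```

Note that surfaces can arrive in any order: the depth comparison, not a global sort, resolves visibility, which is what makes the method fast and simple.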
285. Comparison(2 / 2)
• A-buffer method
– An improvement on the depth-buffer approach
– Additional information
• Antialiased and transparent surfaces
• Other visible-surface detection schemes
– Scan-line method
– Depth-sorting method(painter’s algorithm)
– BSP-tree method
– Area subdivision method
– Octree methods
– Ray casting
294
286. References
• Donald Hearn and M. Pauline Baker,
Computer Graphics
• Junglee Graphics Laboratory, Korea
University