IOSR Journal of Computer Engineering (IOSR-JCE)
e-ISSN: 2278-0661, p-ISSN: 2278-8727, Volume 18, Issue 1, Ver. I (Jan. – Feb. 2016), PP 09-15
www.iosrjournals.org
DOI: 10.9790/0661-18110915
A Preprocessing Scheme for Line Detection with the Hough
Transform for Mobile Robot Self-Navigation
Gideon Kanji Damaryam¹, Haruna Abdu²
¹(Department of Computer Science, Federal University, Lokoja, Nigeria)
²(Department of Computer Science, Federal University, Lokoja, Nigeria)
Abstract: This paper presents the pre-processing scheme used for a vision system for a self-navigating mobile robot which relies on straight line detection using the straight line Hough transform. The straight line Hough transform is an image processing technique that detects straight lines in an image by transforming points in the image to another image, in a way that accumulates evidence that points from the original image are constituents of a straight-line feature. The pre-processing presented includes image re-sizing, conversion to gray scale, edge detection using the Sobel edge-detection filters, and edge thinning with a newly developed method that is a slight modification of an existing method. The newly developed method has been found to yield thinned images more suitable for later stages of this work than other thinning methods. Output from the pre-processing scheme presented is used as input for the remainder of the vision-based self-navigation system.
Keywords: Edge-detection, Hough transform, Image Processing, Machine vision, Pre-processing
I. Introduction
This paper describes an image pre-processing scheme which transforms an image captured by a camera mounted on a mobile robot into a representative binary image optimized for straight-line detection using the Hough transform. Straight lines are detected as part of a vision system for a mobile robot, which works by detecting and interpreting lines and end-points of lines to find navigationally important features. Detection of lines is detailed in [1] and determination of end-points of lines is detailed in [2]. The vision system is part of a self-navigation system intended for use by a small mobile robot within a rectilinear indoor environment such as a university faculty building. The full system is described in [3].
Hough transforms are used for detection of features such as lines, curves and simple shapes within images. They work by transforming potential parts of a target feature in a given image to points in a new image, while accumulating measures of the likelihood that points in the new image are due to features of the required type in the original image. When the transformation is complete, points in the new image can be subjected to a predefined threshold, so that points that are very likely to be due to the required kind of feature can be selected and the original features identified by reversing the transformation process. The Hough transform used depends on the feature to be detected. As the work that is the basis of this paper is concerned with the detection of straight lines, it prepares images for the straight line Hough transform, which transforms points in the original image, each a potential component of a line, to curves in a new image. The number of curves that intersect at a particular point in the new image is a measure of the likelihood that the points from the original image whose transform curves intersect there were points forming a straight line in the original image. The line is defined by the values of the transform parameters, which can be read off the axes of the new image. In this way, lines in the original image can be detected. For brevity, in this work the straight line Hough transform is simply referred to as the Hough transform, as is commonly done. Further information about the Hough transform is available in several publications, including [4] and [3].
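For reference, a line is usually described in this transform by the normal parametrisation (the exact parametrisation used in this work is detailed in [1]; the form below is the standard one):

$$\rho = x\cos\theta + y\sin\theta$$

Each edge point $(x, y)$ then maps to a sinusoidal curve in $(\rho, \theta)$ space, and the curves of collinear points intersect at the $(\rho, \theta)$ values of their common line.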
In the context of the Hough transform, the processing required to transform an image captured by a camera (a raw image) into an image optimized for application of the Hough transform is referred to as pre-processing. The pre-processing tasks presented in this paper include resizing of the captured image, edge-detection and edge-thinning. Image capture is also discussed first, although it is not part of pre-processing in a very strict sense.
II. Capture, Resizing and Conversion to Gray Scale
2.1 Capture-Process-Navigate Cycle
To achieve vision-based navigation, it is necessary to capture and process an image, and then effect navigation on the basis of the result of the processing. This cycle is repeated until a predefined navigation programme is completely executed, or the entire navigation process is otherwise terminated.
2.2 Capture
In this work, images were captured using a single forward-facing camera. It was ensured that there was
sufficient light to clearly identify separate features in the images such as walls, floors and doors. The base of the
camera was set up parallel to the floor.
2.3 Resizing
A standard image size was chosen to give a good compromise between usefulness of output and
processing time. The reduced image size chosen was 128 x 96 pixels. When this size of image is fully
processed, fairly fine details such as the two edges of a door on the side of a corridor can be extracted, yet the
time for processing the image is not prohibitive.
Other image sizes were tried, including 32 x 32 pixels and 64 x 48 pixels. In both cases, the level of detail available when the image is fully processed is limited, which means that higher-level post-processes that interpret the results do not have adequate input. A feature such as a door that is noticeable to a human observer in an image can be reduced to a single line if the image is reduced to a 32 x 32 size, and so the door cannot be picked up as a door by the post-processing for detecting doors, for example. Fig. 1 shows the various types of results for a typical image. Fig. 1a is the original image magnified by 2.67, Fig. 1b is the 32 x 32 thinned version magnified by 8, Fig. 1c is the 64 x 64 version magnified by 4 and Fig. 1d is the 128 x 96 version, again magnified by 2.67. The door circled in Fig. 1a has no chance of being picked up as a door in the 32 x 32 thinned image, because it almost does not appear, or in the 64 x 64 thinned image, because it appears as a single line.
Also, although a square aspect ratio was considered, a 4:3 aspect ratio was selected, as the cameras used all captured in a 4:3 ratio and changing the ratio led to unnecessary loss of information from the sides, as shown in Fig. 1 below.
Figure 1: Effects of various image sizes. (Top left) Original image magnified by 2.67. (Top right) 32 x 32 thinned image magnified 8 times. (Bottom left) 64 x 64 thinned image magnified 4 times. (Bottom right) 128 x 96 thinned image magnified by 2.67.
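As an illustration only, a reduction of this kind can be done with a standard imaging library. The sketch below uses Python's Pillow package and a hypothetical file name; it is not the implementation used in this work:

```python
from PIL import Image

# Reduce a captured 4:3 image to the 128 x 96 working size chosen above,
# keeping the 4:3 aspect ratio. "corridor.jpg" is a hypothetical file name.
raw = Image.open("corridor.jpg")
small = raw.resize((128, 96), resample=Image.BILINEAR)
small.save("corridor_128x96.png")
```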
Depending on the target features, the algorithms used for detecting them, and the importance of timeliness for specific applications, other image resolutions can be used. [5] used a 30 x 32 grey-scale
image as input to a neural network for the purpose of navigating a robot to avoid moving obstacles and turn into junctions. [6] reduced 512 x 480 images to 64 x 60 images in their Corridor Follower module and then used those as input for the Hough transform. They then used the resulting Hough space as input to a neural network. They report that the reduction in size has no noticeable effect on the performance of the module.
[7] resized captured images of size 512 x 512 to 256 x 256 (a higher resolution than the images in the current work) and used them to locate a docking station using an algorithm that requires up to 5 runs of the Hough transform. They report very high processing times (up to 10 minutes), however.
2.4 Intensity Determination
The camera used for this work captures coloured images. These are stored as image objects that hold information about the levels of the primary colours (red, blue and green) at every point of the image. For edge-detection to commence, it is necessary to determine the intensity at each point. This is done by extracting the level of each of the three colours and taking their average at each point.
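A minimal sketch of this averaging, assuming the image is held as a NumPy array of red, green and blue levels (the paper does not state how its images are represented internally):

```python
import numpy as np

def intensity(rgb):
    """Grey-scale intensity as the plain average of the red, green and
    blue levels at each point, as described above (not a weighted sum)."""
    # rgb has shape (height, width, 3); the result has shape (height, width)
    return rgb.astype(np.float32).mean(axis=2)
```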
2.5 Image Point Indexing
Points in images are labeled with identification codes as illustrated in Fig. 2. The point at the top-left position is labeled 0. Subsequent points going right are labeled with consecutive numbers until the end of the row. The labeling is continued on the next row from the left.
Figure 2: Image points indexing
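In code, this row-major labelling and its inverse are one line each. A sketch for the 128 x 96 working size (the function names are illustrative):

```python
WIDTH = 128  # working image width in pixels

def index_of(x, y):
    """Identification code of the point at column x, row y; 0 is top-left."""
    return y * WIDTH + x

def point_of(index):
    """Column and row of the point with the given identification code."""
    return index % WIDTH, index // WIDTH
```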
III. Edge Detection
Edge-detection is the first pre-processing step implemented after an image of the right size has been obtained. It yields an edge-image by plotting lines connecting points where there are significant changes in pixel intensity, which can therefore be taken as indications of edges of features in the image [8]. An edge image, ideally, contains lines that outline features in the original image.
With the intensities in the grey-scale image determined as discussed in 2.4 Intensity Determination, a filter is applied across the image, which works out, for each point in the image, the possibility that the point is an edge. A threshold, selection of which is a task in itself, is then applied to select points with a high possibility of being edge points.
The Sobel edge-detection filters were chosen for this work. Other edge-detection filters and techniques exist; one example is the Laplacian edge-detection filters, which have also been reported to be accurate for detecting edges which are very gradual [8]. The Sobel filters were chosen for this work because not only do they provide a measure of magnitude for edge gradients which was found to be good enough for images of the type used in this work, they also provide angles for the gradients, which are used in some thinning algorithms, including the one used in this work. Thinning in this work is discussed shortly in IV. Edge Thinning. A fuller discussion of the Sobel filters is available in [8], as well as several other resources.
The Sobel filters are two 3 x 3 matrices, $M_{ver}$ and $M_{hor}$. These are applied across images. $M_{ver}$ is designed to find vertical edges and $M_{hor}$ is designed to find horizontal edges. $M_{ver}$ is defined as:
$$M_{ver} = \begin{bmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{bmatrix} \qquad (1)$$
and $M_{hor}$ is defined as:

$$M_{hor} = \begin{bmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ 1 & 2 & 1 \end{bmatrix} \qquad (2)$$
The filters yield a measure of the possibility that there is a vertical and a horizontal edge, respectively, at a given point. These measures are called gradient magnitudes. The two gradient magnitudes, $gm_{ver}$ and $gm_{hor}$, are obtained by convolution of the respective filters with the image $I$:

$$gm_{ver} = M_{ver} * I \qquad (3)$$

$$gm_{hor} = M_{hor} * I \qquad (4)$$

The two are then summed to give an overall gradient magnitude, $gm$, for the point:

$$gm = gm_{ver} + gm_{hor} \qquad (5)$$

The Sobel filters also provide an estimate of the angle, $\theta$, of the gradient. This is simply the arc tangent of the horizontal gradient magnitude divided by the vertical gradient magnitude:

$$\theta = \tan^{-1}\left(\frac{gm_{hor}}{gm_{ver}}\right) \qquad (6)$$
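A sketch of equations (1) to (6) using SciPy's two-dimensional convolution (the paper's own implementation is not given; note that the two responses are summed as in (5), following the text above, rather than combined as absolute values or a Euclidean norm as is also common):

```python
import numpy as np
from scipy.signal import convolve2d

M_VER = np.array([[-1, 0, 1],
                  [-2, 0, 2],
                  [-1, 0, 1]])    # vertical-edge filter, eq. (1)
M_HOR = np.array([[-1, -2, -1],
                  [ 0,  0,  0],
                  [ 1,  2,  1]])  # horizontal-edge filter, eq. (2)

def sobel(image):
    """Gradient magnitude gm and gradient angle theta per eqs. (3) to (6)."""
    gm_ver = convolve2d(image, M_VER, mode="same")  # eq. (3)
    gm_hor = convolve2d(image, M_HOR, mode="same")  # eq. (4)
    gm = gm_ver + gm_hor                            # eq. (5)
    # eq. (6); arctan2 is used instead of a bare arc tangent so that
    # points with gm_ver = 0 do not cause a division by zero
    theta = np.arctan2(gm_hor, gm_ver)
    return gm, theta
```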
3.1 Edge Threshold Determination
Once gradient magnitudes have been determined, the next stage in edge-detection is deciding, from the gradient magnitudes, which points are edge points and which are not. This involves application of a threshold. This work has developed a scheme where, rather than assigning a fixed threshold for determining edges, a target is provided for the number of edge points required. The following algorithm is then used to work out what threshold will result in a number of edges equal to, or a little more than, that specified:
1. Determine the maximum gradient magnitude, $M$, from the array of gradient magnitudes $GM$
2. Determine the minimum gradient magnitude, $m$, from the array of gradient magnitudes $GM$
3. Determine the range of gradient magnitudes, $R$, using $R = M - m + 1$
4. Determine the target number of non-edge points, $N'$, as the difference between the total number of points, $N$, and the target number of peaks, $T'$, i.e. $N' = N - T'$
5. Determine the number of elements of $GM$ having value $a$ for each $a$ where $m \le a \le M$, and store each as $G_a$
6. Initialize a counting variable $i$ to $M$, and set $S_i$, the $M$th cumulative sum, to $G_M$
7. Reduce $i$ by 1
8. Add the previous cumulative group count to the current group count to get the current cumulative group count, i.e. $S_i = S_{i+1} + G_i$
9. If the current cumulative group sum, $S_i$, is equal to or greater than the target number of peaks, $T'$, do 10, else go back to 7
10. Set the threshold to the current value of $i$, and stop
The gradient magnitudes determined by application of the Sobel edge-detection filters provide the input for this algorithm.
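A sketch of this descending cumulative count, under the reconstruction of the algorithm given above (the paper's own code is not given; variable names follow the text):

```python
import numpy as np

def edge_threshold(gm, target_peaks):
    """Smallest gradient magnitude i such that at least target_peaks
    points have magnitude >= i (steps 1 to 10 above)."""
    values = np.rint(gm).astype(int).ravel()
    m, M = values.min(), values.max()   # steps 1 and 2
    counts = np.bincount(values - m)    # G_a for m <= a <= M, step 5
    cumulative = 0                      # S_i, accumulated from i = M down
    for i in range(M, m - 1, -1):       # steps 6 to 8
        cumulative += counts[i - m]
        if cumulative >= target_peaks:  # step 9
            return i                    # step 10
    return m
```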
3.2 Sample Edge Detection Result
Sample results are shown in Fig. 3. Fig. 3a is a typical image, and Fig. 3b is the same image after it has
been converted to grey-scale and Sobel edge detection has been applied to it.
Figure 3a: Sample image
Figure 3b: Sample image after Sobel edge detection
Figure 3: Sample Sobel edge detection results
IV. Edge Thinning
Edge-detection often yields edges several pixels thick. This can make further processing of the image consume unnecessary processing time and memory, and "distracts" feature detection processes from the important features of the image. The objective of edge thinning is to reduce edges to unit thickness without losing any information about the connectedness of edges or introducing any form of distortion to the image.
Several thinning algorithms exist. The most popular method is non-maximum suppression. This method works by removing edge responses that are not maximal across each section of the edge direction in their local neighbourhood. However, the result of this method is still under-thinned in some places, and it removes real edges in other places [9].
The authors of [9] have proposed another method, based on comparing gradient magnitudes within 3 x 3 neighbourhoods. It produces more accurate results than the non-maximum suppression method, and also has the added advantage of minimizing the use of the edge direction, which introduces a lot of arc tangent calculations.
This work found that the method of [9] produces very good thin edges, except that it sometimes loses information about edges that are significant in the context of the original image, and that would also be helpful for robot navigation. A slight modification to step 1 of their method has been proposed that solves this problem.
Steps 0 and 1 of their method follow:
Step 0: Select an unprocessed edge point.
Step 1: Determine the number of edge points, $n$, in the immediate neighbourhood of the current point.
If $n < 2$, set the current point to a non-edge point, i.e., consider it noise;
else, go to step 2.
The modification to step 1 is:
Step 1: Determine the number of edge points, $n$, in the immediate neighbourhood of the current point.
If $n = 0$, set the current point to a non-edge point.
If $n = 1$, then find the number of neighbouring edge points, $nn$, of the 1 neighbour.
If $nn > 1$, the current edge point is maintained; otherwise it is made a non-edge point.
If $n = 2$, maintain as an edge point.
If $n > 2$, go to step 2.
Further processing is done exactly according to step 2 and further steps described in [9].
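A sketch of the modified step 1 on a binary edge image, assuming an 8-connected immediate neighbourhood (the helper names are illustrative, and steps 2 onward of [9] are not reproduced here):

```python
import numpy as np

def neighbours(edges, y, x):
    """Coordinates of edge points in the 8-connected neighbourhood of (x, y)."""
    h, w = edges.shape
    return [(j, i)
            for j in range(max(y - 1, 0), min(y + 2, h))
            for i in range(max(x - 1, 0), min(x + 2, w))
            if (j, i) != (y, x) and edges[j, i]]

def modified_step_1(edges, y, x):
    """Apply the modified step 1 at (x, y), clearing noise points in place.
    Returns True if the point should go on to step 2 of [9]."""
    nbrs = neighbours(edges, y, x)
    n = len(nbrs)
    if n == 0:
        edges[y, x] = 0      # isolated point: noise
        return False
    if n == 1:
        ny, nx = nbrs[0]
        nn = len(neighbours(edges, ny, nx))
        if nn > 1:
            return False     # end of a longer line: maintained
        edges[y, x] = 0      # isolated pair: noise
        return False
    if n == 2:
        return False         # maintained as an edge point
    return True              # n > 2: go on to step 2 of [9]
```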
4.1 Sample Edge Thinning Result
Fig. 4 shows a comparison of the results of the thinning method of [9] and the modified version of it used in this work. Fig. 4a is a sample image. Its edge image after the application of the Sobel operators is shown in Fig. 4b. The results of the algorithm of [9] are shown in Fig. 4c, and the results of the modification by the current work are shown in Fig. 4d. Although the method of [9] gives a cleaner result, it loses important lines such as the door border highlighted in Fig. 4b.
Figure 4a: Sample image
Figure 4b: Sample image after application of the Sobel operator
Figure 4c: Sample image thinned with the method of [9]
Figure 4d: Sample image thinned with the modification to the method of [9] by this work
Figure 4: Comparison of the results of the thinning method of [9] and the modified version of it used in this work
V. Conclusion
In conclusion, this paper presented the pre-processing scheme used for a vision system for a self-navigating mobile robot which relies on straight line detection using the straight line Hough transform, as part of a bigger process of mobile robot self-navigation based on visual data. The scheme starts with image capture by a camera mounted on a mobile robot and ends with a representative binary image optimized for straight-line detection using the Hough transform. It includes image re-sizing, conversion to gray scale, edge detection using the Sobel edge-detection filters, and edge thinning with a newly developed method that is a slight modification of the method of [9].
The newly developed thinning method has been found to yield thinned images more suitable for later stages of the capture-process-navigate cycle of this work. It enabled detection of more navigationally important features at later stages of the overall vision system, and is more accurate than other commonly used thinning methods such as non-maximum suppression, while minimizing the use of processor-intensive functions such as arc tangent calculations. It relies on the gradient magnitudes and angles provided by edge-detection using the Sobel filters; other edge-detection methods, for example the Laplacian edge-detection filters, do not provide both of these.
The threshold for determination of edges after application of the Sobel filters was chosen automatically by targeting a fixed number of edge points. This works for this application because the images are generally similar; it would not work for applications where images vary a lot.
The size chosen for images in the scheme presented is also a direct result of the nature of the specific application in question. Other applications would most likely do better with other image sizes.
Output from the pre-processing scheme presented provides input for the remainder of the vision-based self-navigation system for a mobile robot, which works by detecting and interpreting lines to find navigationally important features.
Acknowledgement
This paper discusses work that was funded by the School of Engineering of the Robert Gordon
University, Aberdeen in the United Kingdom, and was done in their lab using their robot.
References
[1] G. K. Damaryam, A Hough Transform Implementation for Line Detection for a Mobile Robot Self-Navigation System, International Organisation for Scientific Research – Journal of Computer Engineering, 17(6), 2015.
[2] G. K. Damaryam, A Method to Determine End-Points of Straight Lines Detected using the Hough Transform, International Journal of Engineering Research and Applications, 6(1), 2016.
[3] G. K. Damaryam, Vision Systems for a Mobile Robot based on Line Detection using the Hough Transform and Artificial Neural Networks, doctoral diss., Robert Gordon University, Aberdeen, United Kingdom, 2008.
[4] P. Hough, Method and Means for Recognising Complex Patterns, United States of America Patent 3069654, 1962.
[5] R. M. Inigo and R. E. Torres, Mobile Robot Navigation with Vision Based Neural Networks, Proc. SPIE 2352, Mobile Robots IX, 68-79, 1995.
[6] X. Yun, K. Latt and G. J. Scott, Mobile Robot Localization using the Hough Transform and Neural Networks, Proc. IEEE International Symposium on Intelligent Control, Gaithersburg, MD, 1998, 393-400.
[7] D. L. Vaughn and R. C. Arkin, Workstation Recognition using a Constrained Edge-based Hough Transform for Mobile Robot Navigation, 1990.
[8] V. F. Leavers, Shape Detection in Computer Vision Using the Hough Transform (London: Springer-Verlag, 1992).
[9] J. Park, H. Chen and S. T. Huang, A new gray level edge thinning method, Proc. ISCA 13th International Conference on Computer Applications in Industry and Engineering, Honolulu, HI, USA, 2000.

More Related Content

What's hot

Medial axis transformation based skeletonzation of image patterns using image...
Medial axis transformation based skeletonzation of image patterns using image...Medial axis transformation based skeletonzation of image patterns using image...
Medial axis transformation based skeletonzation of image patterns using image...
International Journal of Science and Research (IJSR)
 
Property based fusion for multifocus images
Property based fusion for multifocus imagesProperty based fusion for multifocus images
Property based fusion for multifocus images
IAEME Publication
 
20120140502012
2012014050201220120140502012
20120140502012
IAEME Publication
 
Comparison of various Image Registration Techniques with the Proposed Hybrid ...
Comparison of various Image Registration Techniques with the Proposed Hybrid ...Comparison of various Image Registration Techniques with the Proposed Hybrid ...
Comparison of various Image Registration Techniques with the Proposed Hybrid ...
idescitation
 
Empirical Coding for Curvature Based Linear Representation in Image Retrieval...
Empirical Coding for Curvature Based Linear Representation in Image Retrieval...Empirical Coding for Curvature Based Linear Representation in Image Retrieval...
Empirical Coding for Curvature Based Linear Representation in Image Retrieval...
iosrjce
 
A comparative analysis of retrieval techniques in content based image retrieval
A comparative analysis of retrieval techniques in content based image retrievalA comparative analysis of retrieval techniques in content based image retrieval
A comparative analysis of retrieval techniques in content based image retrieval
csandit
 
Texture based feature extraction and object tracking
Texture based feature extraction and object trackingTexture based feature extraction and object tracking
Texture based feature extraction and object tracking
Priyanka Goswami
 
Automatic rectification of perspective distortion from a single image using p...
Automatic rectification of perspective distortion from a single image using p...Automatic rectification of perspective distortion from a single image using p...
Automatic rectification of perspective distortion from a single image using p...
ijcsa
 
Real-time Multi-object Face Recognition Using Content Based Image Retrieval (...
Real-time Multi-object Face Recognition Using Content Based Image Retrieval (...Real-time Multi-object Face Recognition Using Content Based Image Retrieval (...
Real-time Multi-object Face Recognition Using Content Based Image Retrieval (...
IJECEIAES
 
Fuzzy Region Merging Using Fuzzy Similarity Measurement on Image Segmentation
Fuzzy Region Merging Using Fuzzy Similarity Measurement  on Image Segmentation  Fuzzy Region Merging Using Fuzzy Similarity Measurement  on Image Segmentation
Fuzzy Region Merging Using Fuzzy Similarity Measurement on Image Segmentation
IJECEIAES
 
An Improved Way of Segmentation and Classification of Remote Sensing Images U...
An Improved Way of Segmentation and Classification of Remote Sensing Images U...An Improved Way of Segmentation and Classification of Remote Sensing Images U...
An Improved Way of Segmentation and Classification of Remote Sensing Images U...
ijsrd.com
 
Evaluation of Texture in CBIR
Evaluation of Texture in CBIREvaluation of Texture in CBIR
Evaluation of Texture in CBIR
Zahra Mansoori
 
A novel approach to Image Fusion using combination of Wavelet Transform and C...
A novel approach to Image Fusion using combination of Wavelet Transform and C...A novel approach to Image Fusion using combination of Wavelet Transform and C...
A novel approach to Image Fusion using combination of Wavelet Transform and C...
IJSRD
 
Content-based Image Retrieval Using The knowledge of Color, Texture in Binary...
Content-based Image Retrieval Using The knowledge of Color, Texture in Binary...Content-based Image Retrieval Using The knowledge of Color, Texture in Binary...
Content-based Image Retrieval Using The knowledge of Color, Texture in Binary...
Zahra Mansoori
 
Implementation of Object Tracking for Real Time Video
Implementation of Object Tracking for Real Time VideoImplementation of Object Tracking for Real Time Video
Implementation of Object Tracking for Real Time Video
IDES Editor
 
PC-based Vision System for Operating Parameter Identification on a CNC Machine
PC-based Vision System for Operating Parameter Identification on a CNC MachinePC-based Vision System for Operating Parameter Identification on a CNC Machine
PC-based Vision System for Operating Parameter Identification on a CNC Machine
IDES Editor
 
Ug 205-image-retrieval-using-re-ranking-algorithm-11
Ug 205-image-retrieval-using-re-ranking-algorithm-11Ug 205-image-retrieval-using-re-ranking-algorithm-11
Ug 205-image-retrieval-using-re-ranking-algorithm-11
Ijcem Journal
 
COLOUR BASED IMAGE SEGMENTATION USING HYBRID KMEANS WITH WATERSHED SEGMENTATION
COLOUR BASED IMAGE SEGMENTATION USING HYBRID KMEANS WITH WATERSHED SEGMENTATIONCOLOUR BASED IMAGE SEGMENTATION USING HYBRID KMEANS WITH WATERSHED SEGMENTATION
COLOUR BASED IMAGE SEGMENTATION USING HYBRID KMEANS WITH WATERSHED SEGMENTATION
IAEME Publication
 
Improving image resolution through the cra algorithm involved recycling proce...
Improving image resolution through the cra algorithm involved recycling proce...Improving image resolution through the cra algorithm involved recycling proce...
Improving image resolution through the cra algorithm involved recycling proce...
csandit
 

What's hot (19)

Medial axis transformation based skeletonzation of image patterns using image...
Medial axis transformation based skeletonzation of image patterns using image...Medial axis transformation based skeletonzation of image patterns using image...
Medial axis transformation based skeletonzation of image patterns using image...
 
Property based fusion for multifocus images
Property based fusion for multifocus imagesProperty based fusion for multifocus images
Property based fusion for multifocus images
 
20120140502012
2012014050201220120140502012
20120140502012
 
Comparison of various Image Registration Techniques with the Proposed Hybrid ...
Comparison of various Image Registration Techniques with the Proposed Hybrid ...Comparison of various Image Registration Techniques with the Proposed Hybrid ...
Comparison of various Image Registration Techniques with the Proposed Hybrid ...
 
Empirical Coding for Curvature Based Linear Representation in Image Retrieval...
Empirical Coding for Curvature Based Linear Representation in Image Retrieval...Empirical Coding for Curvature Based Linear Representation in Image Retrieval...
Empirical Coding for Curvature Based Linear Representation in Image Retrieval...
 
A comparative analysis of retrieval techniques in content based image retrieval
A comparative analysis of retrieval techniques in content based image retrievalA comparative analysis of retrieval techniques in content based image retrieval
A comparative analysis of retrieval techniques in content based image retrieval
 
Texture based feature extraction and object tracking
Texture based feature extraction and object trackingTexture based feature extraction and object tracking
Texture based feature extraction and object tracking
 
Automatic rectification of perspective distortion from a single image using p...
Automatic rectification of perspective distortion from a single image using p...Automatic rectification of perspective distortion from a single image using p...
Automatic rectification of perspective distortion from a single image using p...
 
Real-time Multi-object Face Recognition Using Content Based Image Retrieval (...
Real-time Multi-object Face Recognition Using Content Based Image Retrieval (...Real-time Multi-object Face Recognition Using Content Based Image Retrieval (...
Real-time Multi-object Face Recognition Using Content Based Image Retrieval (...
 
Fuzzy Region Merging Using Fuzzy Similarity Measurement on Image Segmentation
Fuzzy Region Merging Using Fuzzy Similarity Measurement  on Image Segmentation  Fuzzy Region Merging Using Fuzzy Similarity Measurement  on Image Segmentation
Fuzzy Region Merging Using Fuzzy Similarity Measurement on Image Segmentation
 
An Improved Way of Segmentation and Classification of Remote Sensing Images U...
An Improved Way of Segmentation and Classification of Remote Sensing Images U...An Improved Way of Segmentation and Classification of Remote Sensing Images U...
An Improved Way of Segmentation and Classification of Remote Sensing Images U...
 
Evaluation of Texture in CBIR
Evaluation of Texture in CBIREvaluation of Texture in CBIR
Evaluation of Texture in CBIR
 
A novel approach to Image Fusion using combination of Wavelet Transform and C...
A novel approach to Image Fusion using combination of Wavelet Transform and C...A novel approach to Image Fusion using combination of Wavelet Transform and C...
A novel approach to Image Fusion using combination of Wavelet Transform and C...
 
Content-based Image Retrieval Using The knowledge of Color, Texture in Binary...
Content-based Image Retrieval Using The knowledge of Color, Texture in Binary...Content-based Image Retrieval Using The knowledge of Color, Texture in Binary...
Content-based Image Retrieval Using The knowledge of Color, Texture in Binary...
 
Implementation of Object Tracking for Real Time Video
Implementation of Object Tracking for Real Time VideoImplementation of Object Tracking for Real Time Video
Implementation of Object Tracking for Real Time Video
 
PC-based Vision System for Operating Parameter Identification on a CNC Machine
PC-based Vision System for Operating Parameter Identification on a CNC MachinePC-based Vision System for Operating Parameter Identification on a CNC Machine
PC-based Vision System for Operating Parameter Identification on a CNC Machine
 
Ug 205-image-retrieval-using-re-ranking-algorithm-11
Ug 205-image-retrieval-using-re-ranking-algorithm-11Ug 205-image-retrieval-using-re-ranking-algorithm-11
Ug 205-image-retrieval-using-re-ranking-algorithm-11
 
COLOUR BASED IMAGE SEGMENTATION USING HYBRID KMEANS WITH WATERSHED SEGMENTATION
COLOUR BASED IMAGE SEGMENTATION USING HYBRID KMEANS WITH WATERSHED SEGMENTATIONCOLOUR BASED IMAGE SEGMENTATION USING HYBRID KMEANS WITH WATERSHED SEGMENTATION
COLOUR BASED IMAGE SEGMENTATION USING HYBRID KMEANS WITH WATERSHED SEGMENTATION
 
Improving image resolution through the cra algorithm involved recycling proce...
Improving image resolution through the cra algorithm involved recycling proce...Improving image resolution through the cra algorithm involved recycling proce...
Improving image resolution through the cra algorithm involved recycling proce...
 

Viewers also liked

G012655365
G012655365G012655365
G012655365
IOSR Journals
 
J010435966
J010435966J010435966
J010435966
IOSR Journals
 
D017131318
D017131318D017131318
D017131318
IOSR Journals
 
U01761147151
U01761147151U01761147151
U01761147151
IOSR Journals
 
M0104198105
M0104198105M0104198105
M0104198105
IOSR Journals
 
B017310612
B017310612B017310612
B017310612
IOSR Journals
 
M017548895
M017548895M017548895
M017548895
IOSR Journals
 
Role of soluble urokinase plasminogen activator receptor (suPAR) as prognosis...
Role of soluble urokinase plasminogen activator receptor (suPAR) as prognosis...Role of soluble urokinase plasminogen activator receptor (suPAR) as prognosis...
Role of soluble urokinase plasminogen activator receptor (suPAR) as prognosis...
IOSR Journals
 
N018138696
N018138696N018138696
N018138696
IOSR Journals
 
G1802024651
G1802024651G1802024651
G1802024651
IOSR Journals
 
E017552629
E017552629E017552629
E017552629
IOSR Journals
 
M1803037881
M1803037881M1803037881
M1803037881
IOSR Journals
 
F011114153
F011114153F011114153
F011114153
IOSR Journals
 
U01725129138
U01725129138U01725129138
U01725129138
IOSR Journals
 
D010631621
D010631621D010631621
D010631621
IOSR Journals
 
M1303038998
M1303038998M1303038998
M1303038998
IOSR Journals
 
Investigation of Reducing Process of Uneven Shade Problem In Case Of Compact ...
Investigation of Reducing Process of Uneven Shade Problem In Case Of Compact ...Investigation of Reducing Process of Uneven Shade Problem In Case Of Compact ...
Investigation of Reducing Process of Uneven Shade Problem In Case Of Compact ...
IOSR Journals
 
H012664448
H012664448H012664448
H012664448
IOSR Journals
 
F1303063449
F1303063449F1303063449
F1303063449
IOSR Journals
 
E013162126
E013162126E013162126
E013162126
IOSR Journals
 

Viewers also liked (20)

G012655365
G012655365G012655365
G012655365
 
J010435966
J010435966J010435966
J010435966
 
D017131318
D017131318D017131318
D017131318
 
U01761147151
U01761147151U01761147151
U01761147151
 
M0104198105
M0104198105M0104198105
M0104198105
 
B017310612
B017310612B017310612
B017310612
 
M017548895
M017548895M017548895
M017548895
 
Role of soluble urokinase plasminogen activator receptor (suPAR) as prognosis...
Role of soluble urokinase plasminogen activator receptor (suPAR) as prognosis...Role of soluble urokinase plasminogen activator receptor (suPAR) as prognosis...
Role of soluble urokinase plasminogen activator receptor (suPAR) as prognosis...
 
N018138696
N018138696N018138696
N018138696
 
G1802024651
G1802024651G1802024651
G1802024651
 
E017552629
E017552629E017552629
E017552629
 
M1803037881
M1803037881M1803037881
M1803037881
 
F011114153
F011114153F011114153
F011114153
 
U01725129138
U01725129138U01725129138
U01725129138
 
D010631621
D010631621D010631621
D010631621
 
M1303038998
M1303038998M1303038998
M1303038998
 
Investigation of Reducing Process of Uneven Shade Problem In Case Of Compact ...
Investigation of Reducing Process of Uneven Shade Problem In Case Of Compact ...Investigation of Reducing Process of Uneven Shade Problem In Case Of Compact ...
Investigation of Reducing Process of Uneven Shade Problem In Case Of Compact ...
 
H012664448
H012664448H012664448
H012664448
 
F1303063449
F1303063449F1303063449
F1303063449
 
E013162126
E013162126E013162126
E013162126
 

Similar to B018110915

V.KARTHIKEYAN PUBLISHED ARTICLE 1
V.KARTHIKEYAN PUBLISHED ARTICLE 1V.KARTHIKEYAN PUBLISHED ARTICLE 1
V.KARTHIKEYAN PUBLISHED ARTICLE 1
KARTHIKEYAN V
 
Digital image forgery detection
Digital image forgery detectionDigital image forgery detection
Digital image forgery detection
AB Rizvi
 
A Hough Transform Implementation for Line Detection for a Mobile Robot Self-N...
A Hough Transform Implementation for Line Detection for a Mobile Robot Self-N...A Hough Transform Implementation for Line Detection for a Mobile Robot Self-N...
A Hough Transform Implementation for Line Detection for a Mobile Robot Self-N...
iosrjce
 
F017663344
F017663344F017663344
F017663344
IOSR Journals
 
Image fusion using nsct denoising and target extraction for visual surveillance
Image fusion using nsct denoising and target extraction for visual surveillanceImage fusion using nsct denoising and target extraction for visual surveillance
Image fusion using nsct denoising and target extraction for visual surveillance
eSAT Publishing House
 
Jc3416551658
Jc3416551658Jc3416551658
Jc3416551658
IJERA Editor
 
IMPROVING IMAGE RESOLUTION THROUGH THE CRA ALGORITHM INVOLVED RECYCLING PROCE...
IMPROVING IMAGE RESOLUTION THROUGH THE CRA ALGORITHM INVOLVED RECYCLING PROCE...IMPROVING IMAGE RESOLUTION THROUGH THE CRA ALGORITHM INVOLVED RECYCLING PROCE...
IMPROVING IMAGE RESOLUTION THROUGH THE CRA ALGORITHM INVOLVED RECYCLING PROCE...
cscpconf
 
QR code decoding and Image Preprocessing
QR code decoding and Image Preprocessing QR code decoding and Image Preprocessing
QR code decoding and Image Preprocessing
Hasini Weerathunge
 
Leader Follower Formation Control of Ground Vehicles Using Dynamic Pixel Coun...
Leader Follower Formation Control of Ground Vehicles Using Dynamic Pixel Coun...Leader Follower Formation Control of Ground Vehicles Using Dynamic Pixel Coun...
Leader Follower Formation Control of Ground Vehicles Using Dynamic Pixel Coun...
ijma
 
AN EFFICIENT FEATURE EXTRACTION AND CLASSIFICATION OF HANDWRITTEN DIGITS USIN...
AN EFFICIENT FEATURE EXTRACTION AND CLASSIFICATION OF HANDWRITTEN DIGITS USIN...AN EFFICIENT FEATURE EXTRACTION AND CLASSIFICATION OF HANDWRITTEN DIGITS USIN...
AN EFFICIENT FEATURE EXTRACTION AND CLASSIFICATION OF HANDWRITTEN DIGITS USIN...
IJCSEA Journal
 
Image processing
Image processingImage processing
Image processing
kamal330
 
IRJET- Image Feature Extraction using Hough Transformation Principle
IRJET- Image Feature Extraction using Hough Transformation PrincipleIRJET- Image Feature Extraction using Hough Transformation Principle
IRJET- Image Feature Extraction using Hough Transformation Principle
IRJET Journal
 
Wavelet-Based Warping Technique for Mobile Devices
Wavelet-Based Warping Technique for Mobile DevicesWavelet-Based Warping Technique for Mobile Devices
Wavelet-Based Warping Technique for Mobile Devices
csandit
 
Research Paper v2.0
Research Paper v2.0Research Paper v2.0
Research Paper v2.0
Kapil Tiwari
 
E017322833
E017322833E017322833
E017322833
IOSR Journals
 
A Review of Optical Character Recognition System for Recognition of Printed Text
A Review of Optical Character Recognition System for Recognition of Printed TextA Review of Optical Character Recognition System for Recognition of Printed Text
A Review of Optical Character Recognition System for Recognition of Printed Text
iosrjce
 
Automatic License Plate Detection in Foggy Condition using Enhanced OTSU Tech...
Automatic License Plate Detection in Foggy Condition using Enhanced OTSU Tech...Automatic License Plate Detection in Foggy Condition using Enhanced OTSU Tech...
Automatic License Plate Detection in Foggy Condition using Enhanced OTSU Tech...
IRJET Journal
 
Supervised Blood Vessel Segmentation in Retinal Images Using Gray level and M...
Supervised Blood Vessel Segmentation in Retinal Images Using Gray level and M...Supervised Blood Vessel Segmentation in Retinal Images Using Gray level and M...
Supervised Blood Vessel Segmentation in Retinal Images Using Gray level and M...
IJTET Journal
 
IRJET- Image based Approach for Indian Fake Note Detection by Dark Channe...
IRJET-  	  Image based Approach for Indian Fake Note Detection by Dark Channe...IRJET-  	  Image based Approach for Indian Fake Note Detection by Dark Channe...
IRJET- Image based Approach for Indian Fake Note Detection by Dark Channe...
IRJET Journal
 
Final Paper
Final PaperFinal Paper
Final Paper
Nicholas Chehade
 

Similar to B018110915 (20)

V.KARTHIKEYAN PUBLISHED ARTICLE 1
V.KARTHIKEYAN PUBLISHED ARTICLE 1V.KARTHIKEYAN PUBLISHED ARTICLE 1
V.KARTHIKEYAN PUBLISHED ARTICLE 1
 
Digital image forgery detection
Digital image forgery detectionDigital image forgery detection
Digital image forgery detection
 
A Hough Transform Implementation for Line Detection for a Mobile Robot Self-N...
A Hough Transform Implementation for Line Detection for a Mobile Robot Self-N...A Hough Transform Implementation for Line Detection for a Mobile Robot Self-N...
A Hough Transform Implementation for Line Detection for a Mobile Robot Self-N...
 
F017663344
F017663344F017663344
F017663344
 
Image fusion using nsct denoising and target extraction for visual surveillance
Image fusion using nsct denoising and target extraction for visual surveillanceImage fusion using nsct denoising and target extraction for visual surveillance
Image fusion using nsct denoising and target extraction for visual surveillance
 
Jc3416551658
Jc3416551658Jc3416551658
Jc3416551658
 
IMPROVING IMAGE RESOLUTION THROUGH THE CRA ALGORITHM INVOLVED RECYCLING PROCE...
IMPROVING IMAGE RESOLUTION THROUGH THE CRA ALGORITHM INVOLVED RECYCLING PROCE...IMPROVING IMAGE RESOLUTION THROUGH THE CRA ALGORITHM INVOLVED RECYCLING PROCE...
IMPROVING IMAGE RESOLUTION THROUGH THE CRA ALGORITHM INVOLVED RECYCLING PROCE...
 
QR code decoding and Image Preprocessing
QR code decoding and Image Preprocessing QR code decoding and Image Preprocessing
QR code decoding and Image Preprocessing
 
Leader Follower Formation Control of Ground Vehicles Using Dynamic Pixel Coun...
Leader Follower Formation Control of Ground Vehicles Using Dynamic Pixel Coun...Leader Follower Formation Control of Ground Vehicles Using Dynamic Pixel Coun...
Leader Follower Formation Control of Ground Vehicles Using Dynamic Pixel Coun...
 
AN EFFICIENT FEATURE EXTRACTION AND CLASSIFICATION OF HANDWRITTEN DIGITS USIN...
AN EFFICIENT FEATURE EXTRACTION AND CLASSIFICATION OF HANDWRITTEN DIGITS USIN...AN EFFICIENT FEATURE EXTRACTION AND CLASSIFICATION OF HANDWRITTEN DIGITS USIN...
AN EFFICIENT FEATURE EXTRACTION AND CLASSIFICATION OF HANDWRITTEN DIGITS USIN...
 
Image processing
Image processingImage processing
Image processing
 
IRJET- Image Feature Extraction using Hough Transformation Principle
IRJET- Image Feature Extraction using Hough Transformation PrincipleIRJET- Image Feature Extraction using Hough Transformation Principle
IRJET- Image Feature Extraction using Hough Transformation Principle
 
Wavelet-Based Warping Technique for Mobile Devices
Wavelet-Based Warping Technique for Mobile DevicesWavelet-Based Warping Technique for Mobile Devices
Wavelet-Based Warping Technique for Mobile Devices
 
Research Paper v2.0
Research Paper v2.0Research Paper v2.0
Research Paper v2.0
 
E017322833
E017322833E017322833
E017322833
 
A Review of Optical Character Recognition System for Recognition of Printed Text
A Review of Optical Character Recognition System for Recognition of Printed TextA Review of Optical Character Recognition System for Recognition of Printed Text
A Review of Optical Character Recognition System for Recognition of Printed Text
 
Automatic License Plate Detection in Foggy Condition using Enhanced OTSU Tech...
Automatic License Plate Detection in Foggy Condition using Enhanced OTSU Tech...Automatic License Plate Detection in Foggy Condition using Enhanced OTSU Tech...
Automatic License Plate Detection in Foggy Condition using Enhanced OTSU Tech...
 
Supervised Blood Vessel Segmentation in Retinal Images Using Gray level and M...
Supervised Blood Vessel Segmentation in Retinal Images Using Gray level and M...Supervised Blood Vessel Segmentation in Retinal Images Using Gray level and M...
Supervised Blood Vessel Segmentation in Retinal Images Using Gray level and M...
 
IRJET- Image based Approach for Indian Fake Note Detection by Dark Channe...
IRJET-  	  Image based Approach for Indian Fake Note Detection by Dark Channe...IRJET-  	  Image based Approach for Indian Fake Note Detection by Dark Channe...
IRJET- Image based Approach for Indian Fake Note Detection by Dark Channe...
 
Final Paper
Final PaperFinal Paper
Final Paper
 

More from IOSR Journals

A011140104
A011140104A011140104
A011140104
IOSR Journals
 
M0111397100
M0111397100M0111397100
M0111397100
IOSR Journals
 
L011138596
L011138596L011138596
L011138596
IOSR Journals
 
K011138084
K011138084K011138084
K011138084
IOSR Journals
 
J011137479
J011137479J011137479
J011137479
IOSR Journals
 
I011136673
I011136673I011136673
I011136673
IOSR Journals
 
G011134454
G011134454G011134454
G011134454
IOSR Journals
 
H011135565
H011135565H011135565
H011135565
IOSR Journals
 
F011134043
F011134043F011134043
F011134043
IOSR Journals
 
E011133639
E011133639E011133639
E011133639
IOSR Journals
 
D011132635
D011132635D011132635
D011132635
IOSR Journals
 
C011131925
C011131925C011131925
C011131925
IOSR Journals
 
B011130918
B011130918B011130918
B011130918
IOSR Journals
 
A011130108
A011130108A011130108
A011130108
IOSR Journals
 
I011125160
I011125160I011125160
I011125160
IOSR Journals
 
H011124050
H011124050H011124050
H011124050
IOSR Journals
 
G011123539
G011123539G011123539
G011123539
IOSR Journals
 
F011123134
F011123134F011123134
F011123134
IOSR Journals
 
E011122530
E011122530E011122530
E011122530
IOSR Journals
 
D011121524
D011121524D011121524
D011121524
IOSR Journals
 

More from IOSR Journals (20)

A011140104
A011140104A011140104
A011140104
 
M0111397100
M0111397100M0111397100
M0111397100
 
L011138596
L011138596L011138596
L011138596
 
K011138084
K011138084K011138084
K011138084
 
J011137479
J011137479J011137479
J011137479
 
I011136673
I011136673I011136673
I011136673
 
G011134454
G011134454G011134454
G011134454
 
H011135565
H011135565H011135565
H011135565
 
F011134043
F011134043F011134043
F011134043
 
E011133639
E011133639E011133639
E011133639
 
D011132635
D011132635D011132635
D011132635
 
C011131925
C011131925C011131925
C011131925
 
B011130918
B011130918B011130918
B011130918
 
A011130108
A011130108A011130108
A011130108
 
I011125160
I011125160I011125160
I011125160
 
H011124050
H011124050H011124050
H011124050
 
G011123539
G011123539G011123539
G011123539
 
F011123134
F011123134F011123134
F011123134
 
E011122530
E011122530E011122530
E011122530
 
D011121524
D011121524D011121524
D011121524
 

Recently uploaded

Day 4 - Excel Automation and Data Manipulation
Day 4 - Excel Automation and Data ManipulationDay 4 - Excel Automation and Data Manipulation
Day 4 - Excel Automation and Data Manipulation
UiPathCommunity
 
Northern Engraving | Modern Metal Trim, Nameplates and Appliance Panels
Northern Engraving | Modern Metal Trim, Nameplates and Appliance PanelsNorthern Engraving | Modern Metal Trim, Nameplates and Appliance Panels
Northern Engraving | Modern Metal Trim, Nameplates and Appliance Panels
Northern Engraving
 
CNSCon 2024 Lightning Talk: Don’t Make Me Impersonate My Identity
CNSCon 2024 Lightning Talk: Don’t Make Me Impersonate My IdentityCNSCon 2024 Lightning Talk: Don’t Make Me Impersonate My Identity
CNSCon 2024 Lightning Talk: Don’t Make Me Impersonate My Identity
Cynthia Thomas
 
APJC Introduction to ThousandEyes Webinar
APJC Introduction to ThousandEyes WebinarAPJC Introduction to ThousandEyes Webinar
APJC Introduction to ThousandEyes Webinar
ThousandEyes
 
Automation Student Developers Session 3: Introduction to UI Automation
Automation Student Developers Session 3: Introduction to UI AutomationAutomation Student Developers Session 3: Introduction to UI Automation
Automation Student Developers Session 3: Introduction to UI Automation
UiPathCommunity
 
MySQL InnoDB Storage Engine: Deep Dive - Mydbops
MySQL InnoDB Storage Engine: Deep Dive - MydbopsMySQL InnoDB Storage Engine: Deep Dive - Mydbops
MySQL InnoDB Storage Engine: Deep Dive - Mydbops
Mydbops
 
Call Girls Kochi 💯Call Us 🔝 7426014248 🔝 Independent Kochi Escorts Service Av...
Call Girls Kochi 💯Call Us 🔝 7426014248 🔝 Independent Kochi Escorts Service Av...Call Girls Kochi 💯Call Us 🔝 7426014248 🔝 Independent Kochi Escorts Service Av...
Call Girls Kochi 💯Call Us 🔝 7426014248 🔝 Independent Kochi Escorts Service Av...
dipikamodels1
 
New ThousandEyes Product Features and Release Highlights: June 2024
New ThousandEyes Product Features and Release Highlights: June 2024New ThousandEyes Product Features and Release Highlights: June 2024
New ThousandEyes Product Features and Release Highlights: June 2024
ThousandEyes
 
Session 1 - Intro to Robotic Process Automation.pdf
Session 1 - Intro to Robotic Process Automation.pdfSession 1 - Intro to Robotic Process Automation.pdf
Session 1 - Intro to Robotic Process Automation.pdf
UiPathCommunity
 
An All-Around Benchmark of the DBaaS Market
An All-Around Benchmark of the DBaaS MarketAn All-Around Benchmark of the DBaaS Market
An All-Around Benchmark of the DBaaS Market
ScyllaDB
 
TrustArc Webinar - Your Guide for Smooth Cross-Border Data Transfers and Glob...
TrustArc Webinar - Your Guide for Smooth Cross-Border Data Transfers and Glob...TrustArc Webinar - Your Guide for Smooth Cross-Border Data Transfers and Glob...
TrustArc Webinar - Your Guide for Smooth Cross-Border Data Transfers and Glob...
TrustArc
 
Demystifying Knowledge Management through Storytelling
Demystifying Knowledge Management through StorytellingDemystifying Knowledge Management through Storytelling
Demystifying Knowledge Management through Storytelling
Enterprise Knowledge
 
Building a Semantic Layer of your Data Platform
Building a Semantic Layer of your Data PlatformBuilding a Semantic Layer of your Data Platform
Building a Semantic Layer of your Data Platform
Enterprise Knowledge
 
CTO Insights: Steering a High-Stakes Database Migration
CTO Insights: Steering a High-Stakes Database MigrationCTO Insights: Steering a High-Stakes Database Migration
CTO Insights: Steering a High-Stakes Database Migration
ScyllaDB
 
DynamoDB to ScyllaDB: Technical Comparison and the Path to Success
DynamoDB to ScyllaDB: Technical Comparison and the Path to SuccessDynamoDB to ScyllaDB: Technical Comparison and the Path to Success
DynamoDB to ScyllaDB: Technical Comparison and the Path to Success
ScyllaDB
 
Introducing BoxLang : A new JVM language for productivity and modularity!
Introducing BoxLang : A new JVM language for productivity and modularity!Introducing BoxLang : A new JVM language for productivity and modularity!
Introducing BoxLang : A new JVM language for productivity and modularity!
Ortus Solutions, Corp
 
QA or the Highway - Component Testing: Bridging the gap between frontend appl...
QA or the Highway - Component Testing: Bridging the gap between frontend appl...QA or the Highway - Component Testing: Bridging the gap between frontend appl...
QA or the Highway - Component Testing: Bridging the gap between frontend appl...
zjhamm304
 
ScyllaDB Leaps Forward with Dor Laor, CEO of ScyllaDB
ScyllaDB Leaps Forward with Dor Laor, CEO of ScyllaDBScyllaDB Leaps Forward with Dor Laor, CEO of ScyllaDB
ScyllaDB Leaps Forward with Dor Laor, CEO of ScyllaDB
ScyllaDB
 
Containers & AI - Beauty and the Beast!?!
Containers & AI - Beauty and the Beast!?!Containers & AI - Beauty and the Beast!?!
Containers & AI - Beauty and the Beast!?!
Tobias Schneck
 
Call Girls Chandigarh🔥7023059433🔥Agency Profile Escorts in Chandigarh Availab...
Call Girls Chandigarh🔥7023059433🔥Agency Profile Escorts in Chandigarh Availab...Call Girls Chandigarh🔥7023059433🔥Agency Profile Escorts in Chandigarh Availab...
Call Girls Chandigarh🔥7023059433🔥Agency Profile Escorts in Chandigarh Availab...
manji sharman06
 

Recently uploaded (20)

Day 4 - Excel Automation and Data Manipulation
Day 4 - Excel Automation and Data ManipulationDay 4 - Excel Automation and Data Manipulation
Day 4 - Excel Automation and Data Manipulation
 
Northern Engraving | Modern Metal Trim, Nameplates and Appliance Panels
Northern Engraving | Modern Metal Trim, Nameplates and Appliance PanelsNorthern Engraving | Modern Metal Trim, Nameplates and Appliance Panels
Northern Engraving | Modern Metal Trim, Nameplates and Appliance Panels
 
CNSCon 2024 Lightning Talk: Don’t Make Me Impersonate My Identity
CNSCon 2024 Lightning Talk: Don’t Make Me Impersonate My IdentityCNSCon 2024 Lightning Talk: Don’t Make Me Impersonate My Identity
CNSCon 2024 Lightning Talk: Don’t Make Me Impersonate My Identity
 
APJC Introduction to ThousandEyes Webinar
APJC Introduction to ThousandEyes WebinarAPJC Introduction to ThousandEyes Webinar
APJC Introduction to ThousandEyes Webinar
 
Automation Student Developers Session 3: Introduction to UI Automation
Automation Student Developers Session 3: Introduction to UI AutomationAutomation Student Developers Session 3: Introduction to UI Automation
Automation Student Developers Session 3: Introduction to UI Automation
 
MySQL InnoDB Storage Engine: Deep Dive - Mydbops
MySQL InnoDB Storage Engine: Deep Dive - MydbopsMySQL InnoDB Storage Engine: Deep Dive - Mydbops
MySQL InnoDB Storage Engine: Deep Dive - Mydbops
 
Call Girls Kochi 💯Call Us 🔝 7426014248 🔝 Independent Kochi Escorts Service Av...
Call Girls Kochi 💯Call Us 🔝 7426014248 🔝 Independent Kochi Escorts Service Av...Call Girls Kochi 💯Call Us 🔝 7426014248 🔝 Independent Kochi Escorts Service Av...
Call Girls Kochi 💯Call Us 🔝 7426014248 🔝 Independent Kochi Escorts Service Av...
 
New ThousandEyes Product Features and Release Highlights: June 2024
New ThousandEyes Product Features and Release Highlights: June 2024New ThousandEyes Product Features and Release Highlights: June 2024
New ThousandEyes Product Features and Release Highlights: June 2024
 
Session 1 - Intro to Robotic Process Automation.pdf
Session 1 - Intro to Robotic Process Automation.pdfSession 1 - Intro to Robotic Process Automation.pdf
Session 1 - Intro to Robotic Process Automation.pdf
 
An All-Around Benchmark of the DBaaS Market
An All-Around Benchmark of the DBaaS MarketAn All-Around Benchmark of the DBaaS Market
An All-Around Benchmark of the DBaaS Market
 
TrustArc Webinar - Your Guide for Smooth Cross-Border Data Transfers and Glob...
TrustArc Webinar - Your Guide for Smooth Cross-Border Data Transfers and Glob...TrustArc Webinar - Your Guide for Smooth Cross-Border Data Transfers and Glob...
TrustArc Webinar - Your Guide for Smooth Cross-Border Data Transfers and Glob...
 
Demystifying Knowledge Management through Storytelling
Demystifying Knowledge Management through StorytellingDemystifying Knowledge Management through Storytelling
Demystifying Knowledge Management through Storytelling
 
Building a Semantic Layer of your Data Platform
Building a Semantic Layer of your Data PlatformBuilding a Semantic Layer of your Data Platform
Building a Semantic Layer of your Data Platform
 
CTO Insights: Steering a High-Stakes Database Migration
CTO Insights: Steering a High-Stakes Database MigrationCTO Insights: Steering a High-Stakes Database Migration
CTO Insights: Steering a High-Stakes Database Migration
 
DynamoDB to ScyllaDB: Technical Comparison and the Path to Success
DynamoDB to ScyllaDB: Technical Comparison and the Path to SuccessDynamoDB to ScyllaDB: Technical Comparison and the Path to Success
DynamoDB to ScyllaDB: Technical Comparison and the Path to Success
 
Introducing BoxLang : A new JVM language for productivity and modularity!
Introducing BoxLang : A new JVM language for productivity and modularity!Introducing BoxLang : A new JVM language for productivity and modularity!
Introducing BoxLang : A new JVM language for productivity and modularity!
 
QA or the Highway - Component Testing: Bridging the gap between frontend appl...
QA or the Highway - Component Testing: Bridging the gap between frontend appl...QA or the Highway - Component Testing: Bridging the gap between frontend appl...
QA or the Highway - Component Testing: Bridging the gap between frontend appl...
 
ScyllaDB Leaps Forward with Dor Laor, CEO of ScyllaDB
ScyllaDB Leaps Forward with Dor Laor, CEO of ScyllaDBScyllaDB Leaps Forward with Dor Laor, CEO of ScyllaDB
ScyllaDB Leaps Forward with Dor Laor, CEO of ScyllaDB
 
Containers & AI - Beauty and the Beast!?!
Containers & AI - Beauty and the Beast!?!Containers & AI - Beauty and the Beast!?!
Containers & AI - Beauty and the Beast!?!
 
Call Girls Chandigarh🔥7023059433🔥Agency Profile Escorts in Chandigarh Availab...
Call Girls Chandigarh🔥7023059433🔥Agency Profile Escorts in Chandigarh Availab...Call Girls Chandigarh🔥7023059433🔥Agency Profile Escorts in Chandigarh Availab...
Call Girls Chandigarh🔥7023059433🔥Agency Profile Escorts in Chandigarh Availab...
 

B018110915

  • 1. IOSR Journal of Computer Engineering (IOSR-JCE) e-ISSN: 2278-0661,p-ISSN: 2278-8727, Volume 18, Issue 1, Ver. I (Jan – Feb. 2016), PP 09-15 www.iosrjournals.org DOI: 10.9790/0661-18110915 www.iosrjournals.org 9 | Page A Preprocessing Scheme for Line Detection with the Hough Transform for Mobile Robot Self-Navigation Gideon Kanji Damaryam1 , Haruna Abdu2 1 (Department of Computer Science, Federal University, Lokoja, Nigeria) 2 (Department of Computer Science, Federal University, Lokoja, Nigeria) Abstract: This paper presents the pre-processing scheme used for a vision system for a self-navigating mobile robot which relies on straight line detection using the Straight Line Hough transform. The straight line Hough transform is an Image Processing technique for detection of straight lines in an image by transforming points in the image to another image in a way that accumulates evidence that the points from the original image are constituents of a straight line type feature from the original image. The pre-processing presented includes image re-sizing, conversion to gray scale, edge detection using the Sobel edge-detection filters, and edge thinning with a newly developed method that is a slight modification of an existing method. The newly developed method has been found to yield thinned images more suitable for later stages of this work than other thinning methods. Output from the pre-processing scheme presented is used as input for the remainder of the vision-based self- navigation system. Keywords: Edge-detection, Hough transform, Image Processing, Machine vision, Pre-processing I. Introduction This paper describes an image pre-processing scheme, which transforms an image captured by a camera mounted on a mobile robot, into a representative binary image optimized for straight-line detection using the Hough transform. Straight lines are detected as part of a vision systemfor a mobile robot, which works by detecting and interpreting lines and end-point of lines to find navigationally important features.Detection of lines is detailed in [1] and determination of end-points of lines is detailed in [2]. The vision system is part of a self-navigation system intended for use by a small mobile robot within a rectilinear indoor environment such as a University faculty building. The full systemis described in [3]. Hough transforms are used for detection of features such as lines, curves and simple shapes within images. They work by transforming potential parts of a target feature in a given image topoints in a new image while accumulating measures of the likelihood that points in the new image aredue to features of the required type from the original image. When the transformation is complete, points in the new image can then be subjected to a predefined threshold so that points that are very likely to be due to the required kind of feature can be selected and the original features identified by reversing the transformation process. The Hough transform used depends on the feature to be detected. As the work that is the basis of this paper is concerned with the detection of straight lines, it prepares images for the transform called the straight line Hough transform, which transforms points, being potential components of lines, in the original image to curves in a new image. 
were points forming a straight line in the original image. The line is defined by the values of the transform parameters, which can be read off the axes of the new image. In this way, lines in the original image can be detected. For brevity, the straight line Hough transform is referred to in this work simply as the Hough transform, as is commonly done. Further information about the Hough transform is available in several publications, including [4] and [3].

In the context of the Hough transform, the processing required to transform an image captured by a camera (a raw image) into an image optimized for application of the Hough transform is referred to as pre-processing. The pre-processing tasks presented in this paper include resizing of the captured image, edge-detection and edge-thinning. Image capture is also discussed first, although it is not part of pre-processing in a strict sense.

II. Capture, Resizing and Conversion to Gray Scale
2.1 Capture-Process-Navigate Cycle
To achieve vision-based navigation, it is necessary to capture and process an image, and then effect navigation on the basis of the result of the processing. This cycle is repeated until a predefined navigation programme is completely executed, or the entire navigation process is otherwise terminated.
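The cycle in 2.1 is, in effect, a simple control loop. The following is a minimal sketch of it in Python; every name in it (capture_image, preprocess, detect_lines, navigate, programme_finished) is a hypothetical placeholder for the stages described in this paper and in [1] and [2], not the published system's API.

```python
# Sketch of the capture-process-navigate cycle of 2.1.
# All functions are hypothetical stand-ins, not the published system's code.

def capture_image():
    """Stand-in for grabbing a frame from the single forward-facing camera."""
    raise NotImplementedError

def preprocess(raw):
    """Resize, convert to gray scale, detect and thin edges (Sections II-IV)."""
    raise NotImplementedError

def detect_lines(binary):
    """Straight line Hough transform; detailed in [1] and [2]."""
    raise NotImplementedError

def navigate(lines):
    """Turn detected lines into a motion command for the robot."""
    raise NotImplementedError

def run(programme_finished):
    # Repeat until the navigation programme completes or is terminated.
    while not programme_finished():
        raw = capture_image()
        binary = preprocess(raw)
        lines = detect_lines(binary)
        navigate(lines)
```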
2.2 Capture
In this work, images were captured using a single forward-facing camera. It was ensured that there was sufficient light to clearly identify separate features in the images, such as walls, floors and doors. The base of the camera was set up parallel to the floor.

2.3 Resizing
A standard image size was chosen to give a good compromise between usefulness of output and processing time. The reduced image size chosen was 128 x 96 pixels. When an image of this size is fully processed, fairly fine details, such as the two edges of a door on the side of a corridor, can be extracted, yet the time for processing the image is not prohibitive.

Other image sizes were tried, including 32 x 32 pixels and 64 x 48 pixels. In both cases, the level of detail available when the image is fully processed is limited, which means that higher-level post-processes interpreting the results do not have adequate input. A feature such as a door that is noticeable to a human observer in an image can be reduced to a single line when the image is reduced to 32 x 32, so the door cannot be picked up as a door by the post-processing for detecting doors, for example. Fig. 1 shows the various results for a typical image: Fig. 1a is the original image magnified by 2.67, Fig. 1b is the 32 x 32 thinned version magnified by 8, Fig. 1c is the 64 x 64 version magnified by 4, and Fig. 1d is the 128 x 96 version, again magnified by 2.67. The door circled in Fig. 1a has no chance of being picked up as a door in the 32 x 32 thinned image, because it almost does not appear, nor in the 64 x 64 thinned image, because it appears as a single line. Also, although a square aspect ratio was considered, a 4:3 aspect ratio was selected because the cameras used all captured in 4:3 ratio, and changing the ratio led to unnecessary loss of information from the sides, as shown in Fig. 1.

Figure 1: Effects of various image sizes. (Top left) Original image magnified by 2.67. (Top right) 32 x 32 thinned image magnified 8 times. (Bottom left) 64 x 64 thinned image magnified 4 times. (Bottom right) 128 x 96 thinned image magnified by 2.67.

Depending on the target features, the algorithms used for detecting them, and the importance of timeliness for specific applications, other image resolutions can be used.
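As a concrete illustration of 2.3, the down-sampling step can be done with any standard imaging library. The following is a minimal sketch using Pillow; the library choice and the file name are assumptions of this sketch, as the paper does not name its tools.

```python
# Minimal resizing sketch for 2.3, using Pillow (an assumed library choice).
from PIL import Image

TARGET_SIZE = (128, 96)  # the 4:3 working resolution chosen in this work

def load_and_resize(path):
    """Load a captured frame and reduce it to the standard 128 x 96 size.

    The cameras capture at a 4:3 aspect ratio, so plain resizing keeps the
    full field of view; no cropping (and hence no loss at the sides) occurs.
    """
    return Image.open(path).resize(TARGET_SIZE)

# Example usage (hypothetical file name):
# small = load_and_resize("frame.jpg")
```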
For example, [5] used a 30 x 32 grey-scale image as input to a neural network for the purpose of navigating a robot to avoid moving obstacles and to turn into junctions. [6] reduced 512 x 480 images to 64 x 60 in their Corridor Follower module and then used those as input for the Hough transform; they then used the resulting Hough space as input to a neural network, and report that the reduction in size has no noticeable effect on the performance of the module. [7] resized captured images of size 512 x 512 to 256 x 256 (a higher resolution than the images in the current work) and used them to locate a docking station using an algorithm that requires up to 5 runs of the Hough transform; however, they report very high processing times (up to 10 minutes).

2.4 Intensity Determination
The camera used for this work captures coloured images. These are stored as image objects that hold the levels of the primary colours (red, green and blue) at every point of the image. For edge-detection to commence, it is necessary to determine the intensity at each point. This is done by extracting the level of each of the three colours and taking their average at each point.

2.5 Image Point Indexing
Points in images are labelled with identification codes as illustrated in Fig. 2. The point at the top-left position is labelled 0. Subsequent points going right are labelled with consecutive numbers until the end of the row, and the labelling continues on the next row from the left.

Figure 2: Image point indexing
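The intensity averaging of 2.4 and the row-major indexing of 2.5 can each be expressed in a few lines. The sketch below assumes pixels are available as (r, g, b) tuples; the names used are illustrative only and are not taken from the paper.

```python
# Sketch of intensity determination (2.4) and point indexing (2.5).
# WIDTH, HEIGHT and the function names are assumed, not the paper's own.

WIDTH, HEIGHT = 128, 96

def intensity(rgb):
    """Grey-scale intensity: the average of the red, green and blue levels."""
    r, g, b = rgb
    return (r + g + b) / 3.0

def index_of(x, y):
    """Identification code of the point at column x, row y (Fig. 2):
    0 at the top-left, increasing left-to-right, then row by row."""
    return y * WIDTH + x

def coords_of(code):
    """Inverse mapping: recover (x, y) from an identification code."""
    return code % WIDTH, code // WIDTH
```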
III. Edge Detection
Edge-detection is the first pre-processing step implemented after an image of the right size has been obtained. It yields an edge image by plotting lines connecting points where there are significant changes in pixel intensity, which can therefore be taken as indications of edges of features in the image [8]. An edge image, ideally, contains lines that outline features in the original image.

With the intensities in the grey-scale image determined as discussed in 2.4 Intensity Determination, a filter is applied across the image, which works out, for each point in the image, the possibility that the point is an edge. A threshold, the selection of which is a task in itself, is then applied to select points with high possibilities of being edge points.

The Sobel edge-detection filters were chosen for this work. Other edge-detection filters and techniques exist; one example is the Laplacian edge-detection filters, which have also been reported to be accurate for detecting edges that are very gradual [8]. The Sobel filters were chosen for this work because not only do they provide a measure of magnitude for edge gradients that was found to be good enough for images of the type used in this work, they also provide angles for the gradients, which are used in some thinning algorithms, including the one used in this work. Thinning is discussed shortly in IV. Edge Thinning. A fuller discussion of the Sobel filters is available in [8], as well as several other resources.

The Sobel filters are two 3 x 3 matrices, $M_{ver}$ and $M_{hor}$, which are applied across images. $M_{ver}$ is designed to find vertical edges and $M_{hor}$ is designed to find horizontal edges. $M_{ver}$ is defined as:

$$M_{ver} = \begin{bmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{bmatrix} \quad (1)$$

and $M_{hor}$ is defined as:

$$M_{hor} = \begin{bmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ 1 & 2 & 1 \end{bmatrix} \quad (2)$$

The filters yield a measure of the possibility that there is a vertical and a horizontal edge, respectively, at a given point. These measures are called gradient magnitudes. The two gradient magnitudes, $gm_{ver}$ and $gm_{hor}$, are obtained by convolution of the respective filters with the image $I$:

$$gm_{ver} = M_{ver} * I \quad (3)$$

$$gm_{hor} = M_{hor} * I \quad (4)$$

The two are then summed to give an overall gradient magnitude, $gm$, for the point:

$$gm = gm_{ver} + gm_{hor} \quad (5)$$

The Sobel filters also provide an estimate of the angle, $\theta$, of the gradient. This is simply the arc tangent of the horizontal gradient magnitude divided by the vertical gradient magnitude:

$$\theta = \tan^{-1}\left(\frac{gm_{hor}}{gm_{ver}}\right) \quad (6)$$

3.1 Edge Threshold Determination
Once gradient magnitudes have been determined, the next stage in edge-detection is deciding, from the gradient magnitudes, which points are edge points and which are not. This involves the application of a threshold. This work has developed a scheme where, rather than a fixed threshold being assigned for determining edges, a target number of edge points is specified. The following algorithm is then used to work out the threshold that will yield a number of edge points equal to, or a little more than, the target:

1. Determine the maximum gradient magnitude, $M$, from the array of gradient magnitudes $GM$.
2. Determine the minimum gradient magnitude, $m$, from the array of gradient magnitudes $GM$.
3. Determine the range of gradient magnitudes, $R$, using $R = M - m + 1$.
4. Determine the target number of non-edge points, $N'$, as the difference between the total number of points, $N$, and the target number of peaks, $T'$, i.e. $N' = N - T'$.
5. Determine the number of elements of $GM$ having value $a$, for each $a$ where $m \le a \le M$, and store each as $G_a$.
6. Initialize a counting variable $i$ to $M$, and set $S_i$, the $M$th cumulative sum, to $G_M$.
7. Reduce $i$ by 1.
8. Add the previous cumulative group count to the current group count to get the current cumulative group count, i.e. $S_i = S_{i+1} + G_i$.
9. If the current cumulative group sum, $S_i$, is equal to or greater than the target number of peaks, $T'$, go to 10; else go back to 7.
10. Set the threshold to the current value of $i$ and stop.

The gradient magnitudes determined by application of the Sobel edge-detection filters provide the input for this algorithm.
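A compact sketch of equations (1) to (6) and of the threshold search in 3.1 follows. NumPy is an assumed dependency, and the histogram walk reflects this rewrite's reading of steps 6 to 10 (accumulating counts from the highest magnitude downwards until the target is reached); it is illustrative, not the paper's own implementation.

```python
# Sketch of the Sobel gradients (eqs. 1-6) and the target-count threshold (3.1).
# NumPy is an assumed dependency; all names here are illustrative.
import numpy as np

M_VER = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])   # vertical edges, eq. (1)
M_HOR = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]])   # horizontal edges, eq. (2)

def apply3x3(image, kernel):
    """Apply a 3 x 3 filter across the image; border pixels are left at zero."""
    h, w = image.shape
    out = np.zeros_like(image, dtype=float)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y, x] = np.sum(kernel * image[y - 1:y + 2, x - 1:x + 2])
    return out

def sobel(image):
    gm_ver = apply3x3(image, M_VER)        # eq. (3)
    gm_hor = apply3x3(image, M_HOR)        # eq. (4)
    gm = gm_ver + gm_hor                   # eq. (5)
    theta = np.arctan2(gm_hor, gm_ver)     # eq. (6), in quadrant-safe form
    return gm, theta

def edge_threshold(gm, target_edges):
    """Walk the gradient-magnitude histogram from the top down until at
    least target_edges points have been accumulated (steps 6-10 of 3.1)."""
    values = np.rint(gm).astype(int).ravel()
    hi, lo = values.max(), values.min()
    counts = {a: int(np.sum(values == a)) for a in range(lo, hi + 1)}
    i, s = hi, counts[hi]
    while s < target_edges and i > lo:
        i -= 1
        s += counts[i]
    return i  # points with gm >= i are taken as edge points
```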
3.2 Sample Edge Detection Result
Sample results are shown in Fig. 3. Fig. 3a is a typical image, and Fig. 3b is the same image after it has been converted to grey scale and Sobel edge detection has been applied to it.

Figure 3: Sample Sobel edge detection results. (a) Sample image. (b) Sample image after Sobel edge detection.

IV. Edge Thinning
Edge-detection often yields edges several pixels thick. This can make further processing of the image unnecessarily expensive in processing time and memory, and can "distract" feature detection processes from the important, salient features of the image. The objective of edge thinning is to reduce edges to unit thickness without losing any information about the connectedness of edges or introducing any form of distortion to the image.

Several thinning algorithms exist. The most popular is the non-maximum suppression method, which works by removing edge responses that are not maximal across the edge direction in their local neighbourhood. However, the result of this method is still under-thinned in some places, and it removes real edges in other places [9].

[9] have proposed another method, based on comparing gradient magnitudes within 3 x 3 neighbourhoods. It produces more accurate results than the non-maximum suppression method, and also has the added advantage of minimizing the use of the edge direction, which introduces a lot of arc tangent calculations. This work found that the method of [9] produces very good thin edges, except that it sometimes loses information about edges that are significant in the context of the original image, and that would also be helpful for robot navigation. A slight modification to step 1 of their method has been proposed here, and has solved this problem. Steps 0 and 1 of their method are:

Step 0: Select an unprocessed edge point.
Step 1: Determine the number of edge points, $n$, in the immediate neighbourhood of the current point. If $n < 2$, set the current point to a non-edge point, i.e., consider it noise; else, go to step 2.

The modification to step 1 is:

Step 1: Determine the number of edge points, $n$, in the immediate neighbourhood of the current point.
If $n = 0$, set the current point to a non-edge point.
If $n = 1$, find the number of neighbouring edge points, $nn$, of that one neighbour. If $nn > 1$, the current edge point is maintained; otherwise it is made a non-edge point.
If $n = 2$, maintain the current point as an edge point.
If $n > 2$, go to step 2.

Further processing is done exactly according to step 2 and the further steps described in [9].
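The modified step 1 reduces to neighbour counting on a binary edge map. The sketch below implements only the decision rule above; the inequality readings (n < 2, nn > 1, and so on) are reconstructed from context in this rewrite, and the remaining steps of [9] are not reproduced.

```python
# Sketch of the modified step 1 of the thinning method of [9].
# `edges` is a 2-D list (or array) of 0/1 values; all names are illustrative.

def edge_neighbours(edges, y, x):
    """Coordinates of the 8-neighbours of (y, x) that are edge points."""
    h, w = len(edges), len(edges[0])
    return [(j, i)
            for j in range(max(0, y - 1), min(h, y + 2))
            for i in range(max(0, x - 1), min(w, x + 2))
            if (j, i) != (y, x) and edges[j][i]]

def modified_step1(edges, y, x):
    """Return 'non-edge', 'keep' or 'step2' for the edge point at (y, x)."""
    nbrs = edge_neighbours(edges, y, x)
    n = len(nbrs)
    if n == 0:
        return 'non-edge'        # isolated point: treat as noise
    if n == 1:
        # Keep the point only if its sole neighbour connects onward, so
        # that end-points of genuine lines are not lost as in [9]'s rule.
        ny, nx = nbrs[0]
        nn = len(edge_neighbours(edges, ny, nx))
        return 'keep' if nn > 1 else 'non-edge'
    if n == 2:
        return 'keep'
    return 'step2'               # n > 2: continue with step 2 of [9]
```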
4.1 Sample Edge Thinning Result
Fig. 4 compares the results of the thinning method of [9] and the modified version of it used in this work. Fig. 4a is a sample image; its edge image, after application of the Sobel operators, is shown in Fig. 4b. The result of the algorithm of [9] is shown in Fig. 4c, and the result of the modification made by the current work is shown in Fig. 4d. Although the method of [9] gives a cleaner result, it loses important lines such as the door border highlighted in Fig. 4b.

Figure 4: Comparison of the results of the thinning method of [9] and the modified version of it used in this work. (a) Sample image. (b) Sample image after application of the Sobel operator. (c) Sample image thinned with the method of [9]. (d) Sample image thinned with this work's modification to the method of [9].

V. Conclusion
This paper presented the pre-processing scheme used for a vision system for a self-navigating mobile robot that relies on straight line detection using the straight line Hough transform, as part of a bigger process of mobile robot self-navigation based on visual data. The scheme starts with image capture by a camera mounted on a mobile robot and ends with a representative binary image optimized for straight-line detection using the Hough transform. It includes image re-sizing, conversion to gray scale, edge detection using the Sobel edge-detection filters, and edge thinning with a newly developed method that is a slight modification of the method of [9].

The newly developed thinning method has been found to yield thinned images more suitable for the later stages of the capture-process-navigate cycle of this work. It enabled detection of more navigationally important features at later stages of the overall vision system, and is more accurate than other commonly used thinning methods such as non-maximum suppression, while minimizing the use of processor-intensive functions such as arc tangent calculations. It relies on the gradient magnitudes and angles provided by edge-detection with the Sobel filters; other edge-detection methods, for example the Laplacian edge-detection filters, do not provide both of these.

The threshold for determining edges after application of the Sobel filters was chosen automatically by targeting a fixed number of edge points. This works for this application because the images are generally similar; it would not work for applications where the images vary widely. The size chosen for images in the scheme presented is also a direct result of the nature of the specific application in question; other applications would most likely do better with other image sizes.

Output from the pre-processing scheme presented provides input for the remainder of the vision-based self-navigation system for a mobile robot, which works by detecting and interpreting lines to find navigationally important features.
Acknowledgement
This paper discusses work that was funded by the School of Engineering of the Robert Gordon University, Aberdeen, in the United Kingdom, and was done in their laboratory using their robot.

References
[1] G. K. Damaryam, A Hough Transform Implementation for Line Detection for a Mobile Robot Self-Navigation System, IOSR Journal of Computer Engineering, 17(6), 2015.
[2] G. K. Damaryam, A Method to Determine End-Points of Straight Lines Detected using the Hough Transform, International Journal of Engineering Research and Applications, 6(1), 2016.
[3] G. K. Damaryam, Vision Systems for a Mobile Robot based on Line Detection using the Hough Transform and Artificial Neural Networks, doctoral diss., Robert Gordon University, Aberdeen, United Kingdom, 2008.
[4] P. Hough, Method and Means for Recognising Complex Patterns, United States Patent 3069654, 1962.
[5] R. M. Inigo and R. E. Torres, Mobile Robot Navigation with Vision Based Neural Networks, Proc. SPIE 2352, Mobile Robots IX, 68-79, 1995.
[6] X. Yun, K. Latt and G. J. Scott, Mobile Robot Localization using the Hough Transform and Neural Networks, Proc. IEEE International Symposium on Intelligent Control, Gaithersburg, MD, 1998, 393-400.
[7] D. L. Vaughn and R. C. Arkin, Workstation Recognition using a Constrained Edge-based Hough Transform for Mobile Robot Navigation, 1990.
[8] V. F. Leavers, Shape Detection in Computer Vision Using the Hough Transform (London: Springer-Verlag, 1992).
[9] J. Park, H. Chen and S. T. Huang, A New Gray Level Edge Thinning Method, Proc. ISCA 13th International Conference on Computer Applications in Industry and Engineering, Honolulu, HI, USA, 2000.