How To Make Canny Edge Detection Algorithm With C#
The Canny edge detection process is an edge detection based segmentation operation in image processing for accurately extracting edges.
The Canny edge detection operation is one of the most complex segmentation processes for extracting object edges. It also performs better than any of the other edge detection based processes we've covered so far.
There are three main objectives we want to satisfy. Firstly, it needs to have a low error rate, meaning it should detect all real edges and produce no spurious responses. Secondly, the detected edge points need to lie as close as possible to the true edges, which sit somewhere inside the blurry transition between objects and background. And lastly, it should produce edges that are a single pixel thin, which we achieve by suppressing local non-maxima.
This process consists of a sequence of operations, some of which are unique to Canny and some of which we've already covered in previous posts. The main building block, however, is spatial convolution.
In case you’re not familiar with convolution, we’ll go through the basics of it here. In short, convolution is a linear process where we use a filter kernel to compute each output pixel value.
What is a filter kernel?
It’s a small matrix of predefined values that we place on top of our input image. To get the resulting pixel value, we multiply each kernel value with the image value it overlaps and sum all the products together. So, to process the whole image, we slide the kernel across it pixel by pixel, calculating each output pixel separately.
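To make that concrete, here is a minimal sketch of spatial convolution on a single-channel image stored as a 2D array. It's only an illustration of the idea, not the Convolute extension used in the function further down, which works on flat byte buffers with a stride. For the symmetric kernels used in this post, flipping the kernel makes no difference, so it's written as a plain weighted sum.

// A simplified illustration of spatial convolution on a single-channel image.
// This is NOT the Convolute extension used further down, just the basic idea.
public static double[,] ConvoluteSimple(double[,] image, double[,] kernel)
{
    int h = image.GetLength(0);
    int w = image.GetLength(1);
    int k = kernel.GetLength(0);   // assume a square, odd-sized kernel
    int off = k / 2;
    double[,] output = new double[h, w];

    for (int y = off; y < h - off; y++)
    {
        for (int x = off; x < w - off; x++)
        {
            double sum = 0;
            // Multiply overlapping values and sum the products
            for (int i = -off; i <= off; i++)
            {
                for (int j = -off; j <= off; j++)
                {
                    sum += image[y + i, x + j] * kernel[i + off, j + off];
                }
            }
            output[y, x] = sum;
        }
    }
    return output;
}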
We can summarize the whole process in 4 steps. Before we get into the processing sequence, though, we should normalize the pixel values to the range between 0 and 1.
Firstly, we need to apply Gaussian blur to the input image. Secondly, we take that resulting image and compute gradient magnitude values by using Sobel operators.
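The GaussianKernel helper used in the function below isn't pasted in this post, so here is a rough sketch of what such a helper could look like. It's an assumption, not the exact code from the demo project; it just builds a normalized square kernel sized the same way the function below sizes it, roughly 6 times sigma, rounded up to an odd dimension.

// A possible shape for the GaussianKernel helper referenced below (an
// assumption, not the exact code from the demo project): a normalized square
// kernel whose size is about 6 * sigma, rounded up to an odd dimension.
public static double[,] GaussianKernel(double sigma)
{
    int dim = (int)Math.Ceiling(sigma * 6);
    if (dim % 2 == 0)
    {
        dim++;
    }
    int off = dim / 2;
    double[,] kernel = new double[dim, dim];
    double sum = 0;

    for (int i = -off; i <= off; i++)
    {
        for (int j = -off; j <= off; j++)
        {
            double value = Math.Exp(-(i * i + j * j) / (2 * sigma * sigma));
            kernel[i + off, j + off] = value;
            sum += value;
        }
    }
    // Normalize so the kernel sums to 1 and doesn't change overall brightness
    for (int i = 0; i < dim; i++)
    {
        for (int j = 0; j < dim; j++)
        {
            kernel[i, j] /= sum;
        }
    }
    return kernel;
}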
The next step is suppressing local non-maxima, which gives us the thin edge output. It's important to take the direction of the edges into account in order to get desirable results, and we can compute those directions from the gradient results.
So basically, once we know the edge orientation at each point, we check whether either of the 2 neighboring values along that direction is larger than the center one. If either of them is larger, we set the resulting value at the center to 0; otherwise, we leave it as it is.
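In case the direction handling sounds abstract, this is the math behind it, sketched for a single pixel with gradient components gx and gy. The helper name and signature are just for this illustration.

// Gradient magnitude and direction for a single pixel, given its horizontal
// and vertical gradient components (a minimal illustration of the math that
// non-maxima suppression relies on; the method name is just for this example).
public static (double Magnitude, double AngleDegrees) GradientAt(double gx, double gy)
{
    double magnitude = Math.Sqrt(gx * gx + gy * gy);       // edge strength
    double angle = Math.Atan2(gy, gx) * (180 / Math.PI);   // direction, -180..180 degrees
    return (magnitude, angle);
}

The direction then gets rounded to one of four directions, in steps of 45 degrees, and the pixel only survives if it's at least as large as both of its neighbors along that direction.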
And finally, to obtain the resulting image from this whole process, we need to apply hysteresis thresholding.
What kind of thresholding is that?
Don’t worry, we just keep pixels whose intensity values fall between 2 limits – a lower and an upper threshold. In other words, we let through only a sliver of intensity levels. This is also the first post in which we’ve mentioned this type of thresholding.
I recommend using a ratio between the upper and lower limit somewhere between 2:1 and 3:1 for optimal results. It obviously depends on what kind of image you’re processing, but you can usually get good results with that.
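As a small illustration, this is what that thresholding looks like for a single normalized value, using a 3:1 ratio between the upper and lower limit, the same ratio the function below uses with a lower limit of 0.25. The helper name is just for this example.

// Thresholding as described above: keep only values between the lower and
// upper limit (here a 3:1 ratio, matching the function below). The method
// name is just for this illustration.
public static byte ThresholdBetween(double value, double lower = 0.25)
{
    double upper = 3 * lower;
    return (byte)((value > lower && value < upper) ? 255 : 0);
}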
I used the same convolution function as in the Marr-Hildreth edge detection tutorial, so I won't post it here again. In essence, it's adapted to work with double type values, including the filter kernel.
I also used zero padding on the input image, because the Gaussian filter gets noticeably large and would otherwise cut off a sizable border. For this reason I added black pixels around the image, so the output image ends up the same size as the input image.
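The Pad extension isn't shown in this post either. As a rough sketch, under the assumption that it simply draws the original image onto a larger black bitmap, it could look something like this:

// A possible shape for the Pad extension used below (an assumption, not the
// exact code from the demo project): draw the image onto a larger black
// bitmap so a border of `off` pixels surrounds it on every side.
public static Bitmap Pad(this Bitmap image, int off)
{
    Bitmap padded = new Bitmap(image.Width + 2 * off, image.Height + 2 * off);
    using (Graphics g = Graphics.FromImage(padded))
    {
        g.Clear(Color.Black);   // zero padding
        g.DrawImage(image, off, off, image.Width, image.Height);
    }
    return padded;
}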
// Extension method; place it inside a static class. Requires System.Drawing,
// System.Drawing.Imaging, System.Linq and System.Runtime.InteropServices.
public static Bitmap CannyEdgeDetect(this Bitmap image)
{
    int w = image.Width;
    int h = image.Height;

    // Scale sigma with the image size and make the kernel dimension odd
    double sigma = Math.Min(w, h) * 0.005;
    int kernel_dim = (int)Math.Ceiling(sigma * 6);
    if (kernel_dim % 2 == 0)
    {
        kernel_dim++;
    }
    int off = (kernel_dim - 1) / 2;

    // Zero padding, so the output keeps the same size as the input
    Bitmap padded = image.Pad(off);
    w = padded.Width;
    h = padded.Height;

    BitmapData image_data = padded.LockBits(
        new Rectangle(0, 0, w, h),
        ImageLockMode.ReadOnly,
        PixelFormat.Format24bppRgb);
    int bytes = image_data.Stride * image_data.Height;
    byte[] buffer = new byte[bytes];
    Marshal.Copy(image_data.Scan0, buffer, 0, bytes);
    padded.UnlockBits(image_data);

    // Normalize pixel values to the range [0, 1]
    double[] converted = buffer.Select(x => (double)x).ToArray();
    double max = 0;
    for (int i = 0; i < bytes; i++)
    {
        max = Math.Max(max, converted[i]);
    }
    converted = converted.Select(x => x / max).ToArray();

    // Step 1: Gaussian blur
    converted = converted.Convolute(image_data, GaussianKernel(sigma));

    // Step 2: gradient components with the Sobel operators
    double[] gx = converted.Convolute(image_data, Filters.SobelHorizontal);
    double[] gy = converted.Convolute(image_data, Filters.SobelVertical);
    for (int i = 0; i < bytes; i++)
    {
        // Gradient magnitude, clamped to 1
        double magnitude = Math.Sqrt(Math.Pow(gx[i], 2) + Math.Pow(gy[i], 2));
        converted[i] = (magnitude > 1 ? 1 : magnitude);
    }

    // Step 3: non-maxima suppression - keep only local maxima along the gradient direction
    double[] result = new double[bytes];
    for (int x = 1; x < w - 1; x++)
    {
        for (int y = 1; y < h - 1; y++)
        {
            int position = x * 3 + y * image_data.Stride;
            bool maxima = true;
            // Gradient direction in degrees, in the range -180..180
            double angle = Math.Atan2(gy[position], gx[position]) * (180 / Math.PI);
            for (int i = -1; i <= 1; i++)
            {
                for (int j = -1; j <= 1; j++)
                {
                    // Pair of opposite neighbors in direction (i, j)
                    int neighbor1 = position + i * 3 + j * image_data.Stride;
                    int neighbor2 = position - i * 3 - j * image_data.Stride;
                    double neighbor_angle = Math.Atan2(j, i) * (180 / Math.PI);
                    // Angular distance between the gradient direction and this
                    // neighbor direction, wrapped around so that angles near
                    // -180 and 180 count as close
                    double diff = Math.Abs(angle - neighbor_angle);
                    if (diff > 180)
                    {
                        diff = 360 - diff;
                    }
                    // Only compare against the neighbors that lie along the gradient
                    // direction; neighbor1 != neighbor2 skips the center (i = j = 0)
                    if (diff <= 22.5 && neighbor1 != neighbor2)
                    {
                        if (converted[position] < converted[neighbor1] || converted[position] < converted[neighbor2])
                        {
                            maxima = false;
                        }
                    }
                }
            }
            if (maxima)
            {
                // Keep the local maximum in all three channels
                for (int c = 0; c < 3; c++)
                {
                    result[position + c] = converted[position];
                }
            }
        }
    }

    // Step 4: hysteresis thresholding - keep values between the lower and
    // upper limit (3:1 ratio)
    byte[] byte_res = new byte[bytes];
    for (int i = 0; i < bytes; i++)
    {
        double threshold = 0.25;
        byte_res[i] = (byte)((result[i] > threshold && result[i] < 3 * threshold) ? 255 : 0);
    }

    // Write the thresholded bytes into a new bitmap
    Bitmap res_img = new Bitmap(w, h);
    BitmapData res_data = res_img.LockBits(
        new Rectangle(0, 0, w, h),
        ImageLockMode.WriteOnly,
        PixelFormat.Format24bppRgb);
    Marshal.Copy(byte_res, 0, res_data.Scan0, bytes);
    res_img.UnlockBits(res_data);
    return res_img;
}
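For reference, calling the extension method could look something like this. The file names are placeholders.

// Example usage of the extension method above (file names are placeholders)
using (Bitmap input = new Bitmap("input.jpg"))
using (Bitmap edges = input.CannyEdgeDetect())
{
    edges.Save("edges.png");
}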
I hope this tutorial helped you understand how Canny edge detection process works.
You can also download the demo project and try it out yourself. I haven't pasted all the code in this post, so the project will also show you how the other functions work together with the code above.