How To Make a Point Detection Algorithm With C#
Point detection is a segmentation technique in image processing which we can use to get the position of point objects in an image.
Isolated point detection is a segmentation technique in image processing, which is useful for locating single points or pixels. Furthermore, it’s the simplest process for understanding segmentation fundamentals.
First of all, let’s talk about what segmentation fundamentals are. Regions we want to segment, whether points, lines, or areas, must be disjoint. In our case here, we’re only segmenting single points, which can be useful for processing x-ray images of materials.
Another thing is that pixels inside a region need to satisfy certain properties; for example, they need to be connected in some predefined sense. However, as I mentioned before, we don’t need to worry about this too much in this tutorial since we’re dealing with point detection.
In general though, the segmentation we’re going to use in this and other guides for lines and shapes is based on detecting sharp local changes in intensity. In other words, the bigger the difference between two neighboring pixels is, the more emphasized they will be.
We detect points, or any abrupt local changes of intensity, by using derivatives. Now, we’re not going too deep into the math behind this, but I feel like we should still mention it.
Mainly, we’ll use first order and second order derivatives. Basically, when we use first order derivatives, edges come out thicker. And in case we want to preserve fine details, we should use second order derivatives.
So in our particular case with point detection, we’ll need to use second order derivatives to catch all the points.
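To make that a little more concrete, here’s roughly what the discrete versions of those derivatives look like along a single row of pixels. This is just an illustration, not code from the demo project:
static int FirstDerivative(byte[] row, int x)
{
    //difference between two neighboring pixels; an edge that spans several
    //pixels responds along its whole ramp, which is why edges come out thicker
    return row[x + 1] - row[x];
}
static int SecondDerivative(byte[] row, int x)
{
    //responds only where the slope of the intensity changes,
    //which is why it picks out fine detail such as isolated points
    return row[x + 1] + row[x - 1] - 2 * row[x];
}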
We’re going to use convolution to process our images. In case you’re not familiar with what this is, I’m going to describe it in a nutshell here.
Basically, we’ll need to use a filter kernel, which is a small matrix of predefined values, and lay it on top of our image. To get the resulting pixel value, we multiply each kernel value with the pixel value it overlaps and sum everything together.
We repeat this process by sliding the kernel across the image one pixel at a time until we form the complete resulting image.
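For example, with a made-up 3 by 3 kernel and a made-up 3 by 3 neighborhood of the image (the numbers here are purely illustrative), the resulting value for the center pixel would be computed like this:
kernel            neighborhood
0  1  0           5   8   5
1  2  1           8  10   8
0  1  0           5   8   5

result = 0*5 + 1*8 + 0*5 + 1*8 + 2*10 + 1*8 + 0*5 + 1*8 + 0*5 = 52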
We’re going to use a Laplacian filter for this example. However, we need to be mindful of resulting values that fall outside the range of displayable values. The reason is that, because our filter has a negative value in its center, some of the sums may come out negative.
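The kernel itself isn’t shown in this post, but since the code below references Filters.Laplacian, here is a sketch of what such a class could look like. I’m assuming the common 3 by 3 Laplacian with a negative center, matching the description above; the demo project may well use the opposite sign convention (8 in the center, -1 around it), which responds to bright points instead of dark ones.
public static class Filters
{
    //3x3 Laplacian kernel with a negative center value;
    //the exact values in the demo project may differ
    public static readonly int[,] Laplacian = new int[,]
    {
        {  1,  1,  1 },
        {  1, -8,  1 },
        {  1,  1,  1 }
    };
}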
The way I dealt with this problem is by clamping the resulting values: those that were negative I set to 0, and those above 255 I set to 255. Otherwise the numbers wrap around the byte range in C# and produce a messy, unusable image.
For our final result, I also used thresholding to make a binary image. I set the threshold at an intensity of 200, which means that intensities above it become white and those below become black.
The following function demonstrates how to apply the convolution process to the entire image. However, we can’t position the filter’s center at the border to compute values for those pixels, so we’ll get a resulting image with a black border around it.
We can counter this effect with zero padding, which basically adds a black border to the input image before filtering. We can then trim this border off the resulting image, which leaves us with a result that is the same size as the input one. There is a sketch of this idea right after the convolution function below.
public static byte[] Convolute(this byte[] buffer, BitmapData image_data, int[,] filter)
{
    byte[] result = new byte[buffer.Length];
    //horizontal and vertical offsets from the kernel center to its edge
    int ho = (filter.GetLength(0) - 1) / 2;
    int vo = (filter.GetLength(1) - 1) / 2;
    //skip the border pixels, where the kernel would fall outside the image
    for (int x = ho; x < image_data.Width - ho; x++)
    {
        for (int y = vo; y < image_data.Height - vo; y++)
        {
            int position = x * 3 + y * image_data.Stride;
            int sum = 0;
            //multiply the overlapping kernel and pixel values and sum them up
            //(only the first channel is sampled, assuming a grayscale source)
            for (int i = -ho; i <= ho; i++)
            {
                for (int j = -vo; j <= vo; j++)
                {
                    int filter_position = position + i * 3 + j * image_data.Stride;
                    sum += (buffer[filter_position] * filter[i + ho, j + vo]);
                }
            }
            //clamp the sum to the 0-255 byte range and write it to all three channels
            for (int c = 0; c < 3; c++)
            {
                if (sum > 255)
                {
                    sum = 255;
                }
                else if (sum < 0)
                {
                    sum = 0;
                }
                result[position + c] = (byte)(sum);
            }
        }
    }
    return result;
}
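The padding and trimming mentioned above aren’t shown in this post, but a minimal version could look something like the following. ZeroPad and Crop are hypothetical helpers, not part of the demo project:
public static Bitmap ZeroPad(this Bitmap image, int border)
{
    //draw the original image into a slightly larger black bitmap,
    //which gives the filter room to compute values at the original borders
    Bitmap padded = new Bitmap(image.Width + 2 * border, image.Height + 2 * border);
    using (Graphics g = Graphics.FromImage(padded))
    {
        g.Clear(Color.Black);
        g.DrawImage(image, border, border, image.Width, image.Height);
    }
    return padded;
}
public static Bitmap Crop(this Bitmap image, int border)
{
    //cut the border back off so the result matches the original input size
    Rectangle inner = new Rectangle(border, border,
        image.Width - 2 * border, image.Height - 2 * border);
    return image.Clone(inner, image.PixelFormat);
}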
And here is the function that applies the entire process that we described above.
public static Bitmap PointDetect(this Bitmap image)
{
    int w = image.Width;
    int h = image.Height;
    //copy the pixel data into a byte buffer
    BitmapData image_data = image.LockBits(
        new Rectangle(0, 0, w, h),
        ImageLockMode.ReadOnly,
        PixelFormat.Format24bppRgb);
    int bytes = image_data.Stride * image_data.Height;
    byte[] buffer = new byte[bytes];
    byte[] result = new byte[bytes];
    Marshal.Copy(image_data.Scan0, buffer, 0, bytes);
    image.UnlockBits(image_data);
    //apply laplacian
    result = buffer.Convolute(image_data, Filters.Laplacian);
    //thresholding: intensities of 200 and above become white, the rest black
    for (int i = 0; i < bytes; i++)
    {
        result[i] = (byte)(result[i] < 200 ? 0 : 255);
    }
    //write the result into a new bitmap
    Bitmap res_img = new Bitmap(w, h);
    BitmapData res_data = res_img.LockBits(
        new Rectangle(0, 0, w, h),
        ImageLockMode.WriteOnly,
        PixelFormat.Format24bppRgb);
    Marshal.Copy(result, 0, res_data.Scan0, bytes);
    res_img.UnlockBits(res_data);
    return res_img;
}
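To try it out, something like the following should work; the file names here are just placeholders:
Bitmap input = new Bitmap("input.png");
Bitmap points = input.PointDetect();
points.Save("points.png", System.Drawing.Imaging.ImageFormat.Png);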
I hope this tutorial was helpful in getting a better understanding of how point detection works. In case you want to learn more about image processing, you can check out my other posts.
You can also download the demo project and try it out yourself.