How To Make Image Segmentation Work With C#
This tutorial shows how to use colors for image segmentation in C#. I explain the basics of applying it in different color spaces.
Image segmentation is a process that partitions an image into regions. In essence, if we wanted to make a certain shape stand out from the image, we could use segmentation. However, the exact approach depends on what kind of result we're seeking.
I made a demonstration project for this guide that segments an image and assigns black or white pixel colors in the resulting image. This type of segmentation is also called binary segmentation.
Like most color image processing techniques, we can apply this one in different color models, such as HSI and RGB. We've already covered a few tutorials where we had the opportunity to convert from one model to another, and the same applies to this guide.
This, however, is the first tutorial where we modify HSI components other than intensity. When we want to segment based on color, we work on the individual component sub-images, so the first step with this approach is converting our image from the RGB to the HSI color model.
We're only going to cover this approach briefly, because the code below works with RGB values only. In case you're not familiar with what happens when we convert an image from RGB to the HSI color model: we basically decouple the color and intensity information.
This means we can modify intensities without corrupting color information, and vice versa. For segmentation, we would mainly work with the hue component, where pure color information is stored, and we could also use the saturation component to further narrow down the regions of interest.
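If you did want to go the HSI route, a minimal sketch of the idea could look like the code below. It only computes the hue from the RGB bytes with the standard conversion formula and checks it against a target hue with a tolerance; the helper names, target hue and tolerance are purely illustrative and are not part of the demonstration project.
// Sketch: hue-based check for segmentation (illustrative values).
private static double Hue(byte r, byte g, byte b)
{
    // Standard RGB-to-HSI hue formula, result in degrees (0..360)
    double num = 0.5 * ((r - g) + (r - b));
    double den = Math.Sqrt((r - g) * (r - g) + (r - b) * (g - b));
    double theta = Math.Acos(num / (den + 1e-6)); // small epsilon avoids division by zero for gray pixels
    double hue = (b <= g) ? theta : 2.0 * Math.PI - theta;
    return hue * 180.0 / Math.PI;
}

private static bool InHueRange(byte r, byte g, byte b, double targetHue, double tolerance)
{
    double diff = Math.Abs(Hue(r, g, b) - targetHue);
    diff = Math.Min(diff, 360.0 - diff); // hue wraps around the color circle
    return diff <= tolerance;            // e.g. targetHue = 0 (red), tolerance = 20
}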
When we’re using colors for image segmentation, we generally get better results when we do in RGB color space. This is the reason why we’ll implement it in code this way as well.
Key component to this process is getting a sample color values, around which we’ll create a range. Based on this range, we’ll segment colors wether they fall inside or outside it by color coding pixels in black or white.
We could get this sample by simply selecting a pixel, or we could select a region, calculate the average color values in that region and use that as our sample.
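As a rough sketch of the second option, we could average the colors in a small square region around the selected point. The GetPixel approach below is slow but keeps the idea obvious; the method name and region radius are just placeholders for illustration.
// Average color of a small region around (x, y); simple, slow GetPixel sketch.
public static Color AverageColor(Bitmap image, int x, int y, int radius)
{
    // Sum the channels over a (2 * radius + 1) square clamped to the image borders
    long r = 0, g = 0, b = 0, count = 0;
    for (int j = Math.Max(0, y - radius); j <= Math.Min(image.Height - 1, y + radius); j++)
    {
        for (int i = Math.Max(0, x - radius); i <= Math.Min(image.Width - 1, x + radius); i++)
        {
            Color c = image.GetPixel(i, j);
            r += c.R;
            g += c.G;
            b += c.B;
            count++;
        }
    }
    return Color.FromArgb((int)(r / count), (int)(g / count), (int)(b / count));
}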
The next step is setting the range, and the simplest way is to measure the Euclidean distance from our sample color. We then test every single pixel in the image to see whether its color falls inside that range or not.
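The distance measure itself is the ordinary Euclidean distance between the two RGB vectors:
D(z, a) = ||z − a|| = √((zR − aR)² + (zG − aG)² + (zB − aB)²)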
To give the formula above a little more context, z and a are vectors of color values: a is our sample color and z is the color whose distance from a we measure.
If we imagine the RGB color space as a three-dimensional space and our sample color as a point in it, then by measuring the Euclidean distance around that point we effectively envelop all colors inside the range within a sphere.
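As a quick worked example with made-up values: if the sample color is (200, 30, 40) and a pixel has color (180, 50, 60), the distance is √(20² + 20² + 20²) = √1200 ≈ 34.6. With the threshold of 30 used in the code below, that pixel falls outside the sphere and is colored black.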
// Requires System.Drawing, System.Drawing.Imaging and System.Runtime.InteropServices.
public static Bitmap ImageSegment(this Bitmap image, int x, int y)
{
    int w = image.Width;
    int h = image.Height;
    // Copy the source pixels into a byte buffer for fast access
    BitmapData image_data = image.LockBits(
        new Rectangle(0, 0, w, h),
        ImageLockMode.ReadOnly,
        PixelFormat.Format24bppRgb);
    int bytes = image_data.Stride * image_data.Height;
    byte[] buffer = new byte[bytes];
    byte[] result = new byte[bytes];
    Marshal.Copy(image_data.Scan0, buffer, 0, bytes);
    image.UnlockBits(image_data);
    // Limit of the color range for segmentation (radius of the sphere)
    int d0 = 30;
    // Position of the sample color in the buffer
    int sample_position = x * 3 + y * image_data.Stride;
    for (int i = 0; i + 2 < bytes; i += 3)
    {
        // Euclidean distance between the current pixel and the sample color
        double euclidean = 0;
        for (int c = 0; c < 3; c++)
        {
            euclidean += Math.Pow(buffer[i + c] - buffer[sample_position + c], 2);
        }
        euclidean = Math.Sqrt(euclidean);
        // Inside the range -> white, outside -> black
        for (int c = 0; c < 3; c++)
        {
            result[i + c] = (byte)(euclidean > d0 ? 0 : 255);
        }
    }
    // Write the segmented bytes into a new bitmap
    Bitmap res_img = new Bitmap(w, h, PixelFormat.Format24bppRgb);
    BitmapData res_data = res_img.LockBits(
        new Rectangle(0, 0, w, h),
        ImageLockMode.WriteOnly,
        PixelFormat.Format24bppRgb);
    Marshal.Copy(result, 0, res_data.Scan0, bytes);
    res_img.UnlockBits(res_data);
    return res_img;
}
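Calling the extension method is straightforward. A minimal usage sketch, where the file names and sample coordinates are just placeholders, could look like this:
// Segment around the color of the pixel at (120, 80) and save the result
Bitmap source = new Bitmap("input.png");
Bitmap segmented = source.ImageSegment(120, 80);
segmented.Save("segmented.png");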
I hope this tutorial helped you understand image segmentation better and that the code I provided was useful to you.
You can also download the entire demonstration project and try it out yourself.