# jupyter notebook/python code

2. Implement the Python code snippets given in the chapter below in a Jupyter Notebook.

ATTACHED FILE(S)

Detecting Edges and Applying Image Filters
In this chapter, we are going to see how to apply cool visual effects to images. We will learn
how to use fundamental image processing operators, discuss edge detection, and see how
we can use image filters to apply various effects to photos.

By the end of this chapter, you will know:

What 2D convolution is, and how to use it
How to blur an image
How to detect edges in an image
How to apply motion blur to an image
How to sharpen and emboss an image
How to erode and dilate an image
How to create a vignette filter
How to enhance image contrast

Detecting Edges and Applying Image Filters Chapter 2


2D convolution
Convolution is a fundamental operation in image processing. We basically apply a
mathematical operator to each pixel, and change its value in some way. To apply this
mathematical operator, we use another matrix called a kernel. The kernel is usually much
smaller in size than the input image. For each pixel in the image, we take the kernel and
place it on top so that the center of the kernel coincides with the pixel under consideration.
We then multiply each value in the kernel matrix with the corresponding values in the
image, and then sum it up. This is the new value that will be applied to this position in the
output image.

Here, the kernel is called the image filter and the process of applying this kernel to the
given image is called image filtering. The output obtained after applying the kernel to the
image is called the filtered image. Depending on the values in the kernel, it performs
different functions such as blurring, detecting edges, and so on.

Let’s start with the simplest case, which is the identity kernel. This kernel doesn’t really
change the input image. If we consider a 3×3 identity kernel, it looks something like the
following:

Detecting Edges and Applying Image Filters Chapter 2

[ 39 ]

Blurring
Blurring refers to averaging the pixel values within a neighborhood. This is also called a
low pass filter. A low pass filter is a filter that allows low frequencies, and blocks higher
frequencies. Now, the next question that comes to our mind is: what does frequency mean
in an image? Well, in this context, frequency refers to the rate of change of pixel values. So
we can say that the sharp edges would be high-frequency content because the pixel values
change rapidly in that region. By the same logic, plain areas would be low-frequency
content, so a low pass filter smooths out the edges.

A simple way to build a low pass filter is by uniformly averaging the values in the
neighborhood of a pixel. We can choose the size of the kernel depending on how much we
want to smooth the image, and it will correspondingly have different effects. If you choose a
bigger size, then you will be averaging over a larger area. This tends to increase the
smoothing effect. Let’s see what a 3×3 low pass filter kernel looks like:

We are dividing the matrix by 9 because we want the values to sum up to one. This is called
normalization, and it’s important because we don’t want to artificially increase the intensity
value at that pixel’s location. So, you should normalize the kernel before applying it to an
image. Normalization is a really important concept, and it is used in a variety of scenarios,
so you should read a couple of tutorials online to get a good grasp on it.

Here is the code to apply this low pass filter to an image:

Detecting Edges and Applying Image Filters Chapter 2

[ 40 ]

If you run the preceding code, you will see the original image alongside progressively blurrier outputs.

Size of the kernel versus blurriness
In the preceding code, we generate averaging kernels of different sizes and use the
cv2.filter2D function to apply them to the input image. If you look at the images
carefully, you can see that they keep getting blurrier as we increase the kernel size.
The reason is that a bigger kernel averages over a larger area, which produces a
stronger blurring effect.

An alternative way of doing this would be to use the OpenCV function cv2.blur. If you
don’t want to generate the kernels yourself, you can just use this function directly. We can
call it using the following line of code:


This will apply the 3×3 kernel to the input and give you the output directly.

Motion blur
When we apply the motion blurring effect, the result looks as if the picture was captured
while moving in a particular direction. For example, you can make an image look like it
was captured from a moving car.

The input and output images will look like the following ones:

Following is the code to achieve this motion blurring effect:


Under the hood
We are reading the image as usual, and then constructing a motion blur kernel. A motion
blur kernel averages the pixel values in a particular direction; it’s like a directional
low pass filter. A 3×3 horizontal motion-blurring kernel has 1/3 in each cell of its middle
row and zeros everywhere else.

This will blur the image in a horizontal direction. You can pick any direction and it will
work accordingly. The amount of blurring will depend on the size of the kernel. So, if you
want to make the image blurrier, just pick a bigger size for the kernel. To see the full effect,
we have taken a 15×15 kernel in the preceding code. We then use cv2.filter2D to apply this
kernel to the input image to obtain the motion-blurred output.

Sharpening
Applying the sharpening filter will sharpen the edges in the image. This filter is very useful
when we want to enhance the edges of an image that’s not crisp enough. Here are some
images to give you an idea of what the image sharpening process looks like:


As you can see in the preceding figure, the level of sharpening depends on the type of
kernel we use. We have a lot of freedom to customize the kernel here, and each kernel will
give you a different kind of sharpening. To just sharpen an image, as we are doing in the
top-right image in the preceding picture, we would use a kernel like this:

If we want to do excessive sharpening, as in the bottom-left image, we would use the
following kernel:

But the problem with these two kernels is that the output image looks artificially enhanced.
If we want our images to look more natural, we would use an edge enhancement filter. The
underlying concept remains the same, but we use an approximate Gaussian kernel to build
this filter. It will help us smooth the image when we enhance the edges, thus making the
image look more natural.

Here is the code to achieve the effects applied in the preceding screenshot:


As you may have noticed, in the preceding code we didn’t divide the first two kernels by a
normalizing factor. The reason is that the values inside each of those kernels already sum
up to one, so we are implicitly dividing the matrices by one.

Understanding the pattern
You must have noticed a common pattern in the image filtering code examples. We build a
kernel and then use cv2.filter2D to get the desired output. That’s exactly what’s happening in
this code example as well! You can play around with the values inside the kernel and see if
you can get different visual effects. Make sure that you normalize the kernel before
applying it, or else the image will look too bright because you are artificially increasing the
pixel values in the image.

Embossing
An embossing filter will take an image and convert it to an embossed image. We basically
take each pixel, and replace it with a shadow or a highlight. Let’s say we are dealing with a
relatively plain region in the image. Here, we need to replace it with a plain gray color
because there’s not much information there. If there is a lot of contrast in a particular region,
we will replace it with a white pixel (highlight), or a dark pixel (shadow), depending on the
direction in which we are embossing.


This is what it will look like:

Let’s take a look at the code and see how to do this:


If you run the preceding code, you will see that the output images are embossed. As we can
see from the preceding kernels, we are just replacing the current pixel value with the
difference of the neighboring pixel values in a particular direction. The embossing effect is
achieved by offsetting all the pixel values in the image by 128, which maps flat regions to
mid-gray while edges show up as highlights or shadows.

Edge detection
The process of edge detection involves detecting sharp edges in the image, and producing a
binary image as the output. Typically, we draw white lines on a black background to
indicate those edges. We can think of edge detection as a high pass filtering operation. A
high pass filter allows high-frequency content to pass through and blocks the low-frequency
content. As we discussed earlier, edges are high-frequency content. In edge detection, we
want to retain these edges and discard everything else. Hence, we should build a kernel
that is the equivalent of a high pass filter.

Let’s start with a simple edge detection filter known as the Sobel filter. Since edges can
occur in both horizontal and vertical directions, the Sobel filter is composed of the
following two kernels:


The kernel on the left detects horizontal edges and the kernel on the right detects vertical
edges. OpenCV provides a function, cv2.Sobel, to apply the filter directly to a given image.
Here is the code to use Sobel filters to detect edges:

Notice that we set the depth of the output image to cv2.CV_64F. With 8-bit input and
output images, the derivatives would be truncated (negative gradient values would be
lost), so a higher depth value such as cv2.CV_64F can be used instead. If the edges are not
well defined, the kernel size can be adjusted: decrease it to obtain thinner edges, and
increase it for the opposite effect.

The output will look something like the following:

In the preceding figure, the image in the middle is the output of a horizontal edge detector,
and the image on the right is the output of a vertical edge detector. As we can see here, the
Sobel filter detects edges in either a horizontal or a vertical direction, and it doesn’t give us
a holistic view of all the edges. To overcome this, we can use the Laplacian filter. The
advantage of this filter is that it uses the second derivative in both directions. You can call
the function using the following line:


The output will look something like the following screenshot:

Even though the Laplacian kernel worked well in this case, it doesn’t always work well. It
gives rise to a lot of noise in the output, as shown in the following screenshot. This is where
the Canny edge detector comes in handy:


As we can see in the preceding images, the Laplacian kernel gives rise to a noisy output,
which is not exactly useful. To overcome this problem, we use the Canny edge detector. To
use the Canny edge detector, we can use the following function:

As we can see, the quality of the Canny edge detector’s output is much better. It takes two
numbers as arguments to indicate the thresholds. The second argument is called the low
threshold value, and the third argument is called the high threshold value. If the gradient
value is above the high threshold value, it is marked as a strong edge. The Canny edge
detector then starts tracking the edge from this point and continues the process until the
gradient value falls below the low threshold value. As you increase these thresholds, the
weaker edges will be ignored, and the output image will be cleaner and sparser. You can
play around with the thresholds and see what happens as you increase or decrease their
values.

Erosion and dilation
Erosion and dilation are morphological image processing operations. Morphological image
processing basically deals with modifying geometric structures in the image. These
operations are primarily defined for binary images, but we can also use them on grayscale
images. Erosion basically strips out the outermost layer of pixels in a structure, whereas
dilation adds an extra layer of pixels to a structure.

Let’s see what these operations look like:

Following is the code to achieve this:


Afterthought
OpenCV provides functions to directly erode and dilate an image. They are called erode
and dilate, respectively. The interesting thing to note is the third argument in these two
functions. The number of iterations will determine how much you want to erode/dilate a
given image. It basically applies the operation successively to the resultant image. You can
take a sample image and play around with this parameter to see what the results look like.

Creating a vignette filter
Using all the information we have, let’s see if we can create a nice vignette filter. The output
will look something like the following:


Here is the code to achieve this effect:

What’s happening underneath?
The vignette filter basically focuses the brightness on a particular part of the image, while
the other parts look faded. In order to achieve this, we need to filter out each channel in the
image using a Gaussian kernel. OpenCV provides a function to do this, called
cv2.getGaussianKernel. We need to build a 2D kernel whose size matches the size of the
image. The second parameter of the function, sigma, is interesting: it is the standard
deviation of the Gaussian, and it controls the radius of the bright central region. You can
play around with this parameter and see how it affects the output.

Once we build the 2D kernel, we need to build a mask by normalizing this kernel and
scaling it up, as shown in the following line:

This is an important step because if you don’t scale it up, the image will look black. This
happens because all the pixel values will be close to zero after you superimpose the mask
on the input image. After this, we iterate through all the color channels and apply the mask
to each channel.


How do we move the focus around?
We now know how to create a vignette filter that focuses on the center of the image. Let’s
say we want to achieve the same vignette effect, but we want to focus on a different region
in the image, as shown in the following figure:

All we need to do is build a bigger Gaussian kernel, and make sure that the peak coincides
with the region of interest. Following is the code to achieve this:


Enhancing the contrast in an image
Whenever we capture images in low-light conditions, the images turn out to be dark. This
typically happens when you capture images in the evening, or in a dimly lit room. You
must have seen this happen many times! The reason is that the pixel values tend to
concentrate near zero when we capture images under such conditions.
When this happens, a lot of details in the image are not clearly visible to the human eye. The
human eye likes contrast, and so we need to adjust the contrast to make the image look nice
and pleasant. A lot of cameras and photo applications implicitly do this already. We use a
process called histogram equalization to achieve this.

To give an example, this is what it looks like before and after contrast enhancement:

As we can see here, the input image on the left is really dark. To rectify this, we need to
adjust the pixel values so that they are spread across the entire spectrum of values, that is,
from 0 to 255.


Following is the code for adjusting the pixel values:

Histogram equalization is applicable to grayscale images. OpenCV provides a function,
cv2.equalizeHist, to achieve this effect. As we can see here, the code is pretty
straightforward: we read the image and equalize its histogram to normalize the
brightness and increase the contrast of the image.

How do we handle color images?
Now that we know how to equalize the histogram of a grayscale image, you might be
wondering how to handle color images. The thing about histogram equalization is that it’s a
nonlinear process. So, we cannot just separate out the three channels in an RGB image,
equalize the histogram separately, and combine them later to form the output image. The
concept of histogram equalization is only applicable to the intensity values in the image. So,
we have to make sure not to modify the color information when we do this.

In order to handle histogram equalization for color images, we need to convert the image
to a color space where intensity is separated from the color information. YUV is a good
example of such a color space: the YUV model defines a color space in terms of one
luminance (Y) component and two chrominance (UV) components. Once we convert the
image to YUV, we just need to equalize the Y channel and combine it with the other two
channels to get the output image.


Following is an example of what it looks like:

Here is the code to achieve histogram equalization for color images:


Summary
In this chapter, we learned how to use image filters to apply cool visual effects to images.
We discussed the fundamental image processing operators, and how we can use them to
build various things. We learned how to detect edges using various methods. We
understood the importance of 2D convolution and how we can use it in different scenarios.
We discussed how to smooth, motion-blur, sharpen, emboss, erode, and dilate an image.
We learned how to create a vignette filter, and how we can change the region of focus as
well. We discussed contrast enhancement and how we can use histogram equalization to
achieve it.

In the next chapter, we will discuss how to cartoonize a given image.
