Image_Processing_Course

Image Processing Course Assignments

HW1

Q1 and Q2

These questions are about enhancing the quality of two dark photos. The original images are quite dark and unpleasant to the human eye, so I enhanced them using gamma transformation and contrast stretching techniques. These are the original images:
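The two enhancement techniques can be sketched as below; this is a minimal NumPy version, and the gamma value and percentile limits are illustrative choices, not the assignment's exact parameters:

```python
import numpy as np

def gamma_transform(img, gamma=0.5):
    """Brighten a dark image: out = in ** gamma (gamma < 1) on [0, 1] values."""
    normalized = img.astype(np.float64) / 255.0
    return (255.0 * normalized ** gamma).astype(np.uint8)

def contrast_stretch(img, low=2, high=98):
    """Linearly map the [low, high] percentile range onto the full [0, 255]."""
    lo, hi = np.percentile(img, [low, high])
    stretched = (img.astype(np.float64) - lo) / max(hi - lo, 1e-6)
    return (255.0 * np.clip(stretched, 0.0, 1.0)).astype(np.uint8)
```

With gamma below 1, dark pixel values are pushed up much more than bright ones, which is why it works well on underexposed photos.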

And the enhanced images are:

Q3

In Q3 I wrote code to convert Prokudin-Gorskii black-and-white images to color JPEG images. The Prokudin-Gorskii images are in .tif format with separate blue, green, and red channels, and q3.py converts them properly. I chose the Amir, Mosque, and Train images to test the program, which converts a 16-bit image to an 8-bit color JPEG. I used a Gaussian pyramid to improve speed while preserving accuracy. After finding the best match for each channel, each side is clipped by a proper value (since the borders of the original images are in bad shape). This is done automatically.
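The coarse-to-fine alignment idea can be sketched as follows. This is a simplified version: it scores cyclic shifts with a sum of squared differences, while the real code also handles border clipping; all function names here are mine:

```python
import numpy as np

def ssd(a, b, dy, dx):
    """Sum of squared differences between a and b cyclically shifted by (dy, dx)."""
    return np.sum((a - np.roll(b, (dy, dx), axis=(0, 1))) ** 2)

def align(a, b, radius):
    """Brute-force search for the shift of b that best matches a."""
    best = min((ssd(a, b, dy, dx), dy, dx)
               for dy in range(-radius, radius + 1)
               for dx in range(-radius, radius + 1))
    return best[1], best[2]

def pyramid_align(a, b, levels):
    """Coarse-to-fine alignment: estimate the shift on stride-2 downsampled
    copies, double it, then refine in a small window at the finer level."""
    if levels == 0:
        return align(a, b, radius=8)
    dy, dx = pyramid_align(a[::2, ::2], b[::2, ::2], levels - 1)
    dy, dx = 2 * dy, 2 * dx
    ddy, ddx = align(a, np.roll(b, (dy, dx), axis=(0, 1)), radius=2)
    return dy + ddy, dx + ddx
```

The pyramid keeps the expensive exhaustive search at the coarsest level only; each finer level searches just a tiny window around the doubled coarse estimate.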

The results are as below:

Q4

This program changes the flowers' color to pink and blurs the background:

Q5

Filtering an image with the OpenCV filter2D function, with a naive double-for-loop implementation, and with the matrix addition method (which is faster). The time for each method is written under each image:

OpenCV filter2D method: 0.01636419900000008 s
Double-for method: 137.183767327 s
Matrix addition method: 0.2271269970000276 s
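The gap between the two hand-written methods comes from where the inner loop runs. A sketch of both, assuming grayscale float input (the OpenCV call itself is omitted; function names are mine):

```python
import numpy as np

def conv_loops(img, kernel):
    """Naive correlation: one Python-level multiply-accumulate per pixel (slow)."""
    kh, kw = kernel.shape
    padded = np.pad(img.astype(np.float64), ((kh // 2, kh // 2), (kw // 2, kw // 2)))
    out = np.zeros(img.shape)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

def conv_shifted_sums(img, kernel):
    """'Matrix addition' method: one shifted, weighted copy of the whole image
    per kernel element, accumulated with vectorized adds (much faster)."""
    kh, kw = kernel.shape
    padded = np.pad(img.astype(np.float64), ((kh // 2, kh // 2), (kw // 2, kw // 2)))
    h, w = img.shape
    out = np.zeros((h, w))
    for di in range(kh):
        for dj in range(kw):
            out += kernel[di, dj] * padded[di:di + h, dj:dj + w]
    return out
```

For a k x k kernel the matrix addition method does only k^2 vectorized array adds instead of one Python loop iteration per pixel, which explains the timing table above.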

Q6

Histogram specification code to enhance an image's quality
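Histogram specification can be sketched as follows on grayscale arrays. The nearest-CDF mapping via `np.interp` is my implementation choice, not necessarily what the assignment code does:

```python
import numpy as np

def histogram_specification(source, reference):
    """Map each source gray level to the reference level whose cumulative
    distribution (CDF) value is closest, so the output histogram
    approximates the reference histogram."""
    src_vals, src_counts = np.unique(source.ravel(), return_counts=True)
    ref_vals, ref_counts = np.unique(reference.ravel(), return_counts=True)
    src_cdf = np.cumsum(src_counts) / source.size
    ref_cdf = np.cumsum(ref_counts) / reference.size
    # Interpolate the reference's inverse CDF at each source CDF value.
    mapped = np.interp(src_cdf, ref_cdf, ref_vals)
    lookup = dict(zip(src_vals, mapped))
    return np.vectorize(lookup.get)(source)
```

Histogram equalization is the special case where the reference histogram is flat.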

HW2

Q1, Image sharpening techniques:

Image sharpening using spatial and frequency domain tools. The original image is blurred, and we wish to sharpen it using an unsharp mask. This is the original, unsharpened image:

blurred image

The sharpened images are shown below:
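The unsharp-mask idea can be sketched as follows; the box blur here stands in for whatever smoothing filter the assignment actually uses, and the function names are mine:

```python
import numpy as np

def box_blur(img, k=3):
    """Average over a k x k neighbourhood with edge padding."""
    pad = k // 2
    padded = np.pad(img.astype(np.float64), pad, mode="edge")
    h, w = img.shape
    out = np.zeros((h, w))
    for di in range(k):
        for dj in range(k):
            out += padded[di:di + h, dj:dj + w]
    return out / (k * k)

def unsharp_mask(img, amount=1.0):
    """Sharpen by adding back the detail the blur removed:
    sharp = img + amount * (img - blur(img))."""
    img = img.astype(np.float64)
    return img + amount * (img - box_blur(img))
```

The difference `img - blur(img)` is the high-frequency detail; adding a multiple of it back overshoots at edges, which is what makes the image look sharper.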

Q2, Simple template matching problem:

In this problem I used the zero-mean cross-correlation method to match a given template with an image. The patch is a pipe which we want to find in the image. The result is shown below:
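A sketch of the matching loop is below. I additionally normalize each zero-mean window by its norm so high-contrast windows do not dominate; that is my implementation choice, not necessarily the assignment's:

```python
import numpy as np

def zero_mean_match(image, template):
    """Slide a zero-mean template over the image; return the top-left corner
    of the window with the highest normalized zero-mean correlation."""
    th, tw = template.shape
    t = template.astype(np.float64)
    t -= t.mean()
    best_score, best_pos = -np.inf, (0, 0)
    for i in range(image.shape[0] - th + 1):
        for j in range(image.shape[1] - tw + 1):
            window = image[i:i + th, j:j + tw].astype(np.float64)
            window = window - window.mean()
            score = np.sum(window * t) / (np.linalg.norm(window) + 1e-9)
            if score > best_score:
                best_score, best_pos = score, (i, j)
    return best_pos
```

Subtracting the means makes the score insensitive to uniform brightness differences between the template and the image.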

Q3:

In this problem we want to extract three books from an image. The books are rotated, and there is a little perspective in the picture, which makes it a bit hard to get the best results. I chose the four corners of each book and fitted a homography transformation using OpenCV. Finally, the image is warped using myWarpFunction, which I implemented myself. This is the original image: And the three extracted books are:
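The homography fitting can be sketched with a plain direct linear transform (DLT) in NumPy; the assignment uses OpenCV for this step, and `fit_homography` / `apply_homography` are hypothetical names:

```python
import numpy as np

def fit_homography(src, dst):
    """DLT: solve for the 3x3 homography H mapping src points to dst points
    (at least four correspondences); H is the null vector of the design matrix."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    A = np.asarray(rows, dtype=np.float64)
    _, _, vt = np.linalg.svd(A)
    return vt[-1].reshape(3, 3)

def apply_homography(H, pt):
    """Map one point through H and de-homogenize."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return x / w, y / w
```

With exactly four corner correspondences per book the system is minimal, which is why clicking the four corners is enough to rectify each cover.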

Q4, Hybrid images:

Hybrid images are kind of illusory. From up close, you see one image; as you step back and move away from it, you seem to be observing another image. This happens because we perceive details when we are close enough to the image, while from a distance we can only see the overall shape. The details are the high-frequency components of the image, and the overall shape is composed of the low-frequency components. So I used this to generate hybrid images. You can find the article Here. I chose these images:

The motorcycle image is seen from far away, while the bicycle image is seen from up close. The resulting hybrid image is:

The hybrid image is shown smaller so you can see what it looks like from a distance.
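The construction above can be sketched as the low frequencies of the far image plus the high-frequency residual of the near image. The Gaussian sigma here is an arbitrary illustrative choice:

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Direct 2D Gaussian convolution with edge padding (low-pass filter)."""
    size = int(6 * sigma) | 1
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    kernel = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    kernel /= kernel.sum()
    pad = size // 2
    padded = np.pad(img.astype(np.float64), pad, mode="edge")
    h, w = img.shape
    out = np.zeros((h, w))
    for di in range(size):
        for dj in range(size):
            out += kernel[di, dj] * padded[di:di + h, dj:dj + w]
    return out

def hybrid_image(far_img, near_img, sigma=3.0):
    """Low frequencies of far_img plus the high-frequency detail of near_img."""
    return gaussian_blur(far_img, sigma) + (near_img - gaussian_blur(near_img, sigma))
```

The cutoff sigma controls the viewing distance at which the percept flips between the two images.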


HW3

Q1, Hough Transform

In this problem, I tried to detect points on the squares of the chess area.

I've used the Hough transform technique along with extra methods to detect lines. Then I found the intersection of each pair of lines. The code works pretty well on a wide range of similar images. Below you can see the results:

Although I haven't detected all of the corners, the method is largely image-independent and could be applied to similar images. For intermediate results, refer to the HW3 directory.
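The voting scheme behind the line detection can be sketched as below. This is a bare-bones accumulator over a coarse theta grid; the real code surely uses a finer grid plus peak extraction, and the helper names are mine:

```python
import numpy as np

def hough_line(points, shape, n_theta=180):
    """Accumulate votes in (rho, theta) space over edge points given as
    (y, x) pairs; return the (rho, theta) of the strongest line."""
    diag = int(np.ceil(np.hypot(*shape)))
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((2 * diag + 1, n_theta), dtype=np.int64)
    for y, x in points:
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + diag, np.arange(n_theta)] += 1
    r, t = np.unravel_index(np.argmax(acc), acc.shape)
    return r - diag, thetas[t]

def intersect(line1, line2):
    """Intersection (x, y) of two lines in normal form
    x*cos(theta) + y*sin(theta) = rho."""
    (r1, t1), (r2, t2) = line1, line2
    A = np.array([[np.cos(t1), np.sin(t1)], [np.cos(t2), np.sin(t2)]])
    return np.linalg.solve(A, np.array([r1, r2], dtype=float))
```

Each detected line pair then yields a candidate chessboard corner via the 2x2 solve in `intersect`.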

Q2, Texture Synthesis

This problem synthesizes large 2500x2500 textures from a given small image (less than 500x500). Below are some of the given small images:

The textures are generated by finding a proper patch at each step. First, a small patch is selected from the source image. Next, by template matching, I find a patch that is similar to the previous one over a thin strip on its right side. After that, I merge the two patches along a minimum cut whose cost is minimized. This eventually results in a better-looking final image.

By continuing this procedure, the first row of the target image is completed. Next, we generate the following rows. The only difference is that the overlap between the already-filled part of the target image and the source patch changes. For the first patch of each row, the overlap is a rectangle: the lower part of the last completed row. For the other patches, the overlap is an L-shaped area. The logic stays the same; only the overlap changes at each step.
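The minimum-cut merge for a vertical overlap strip can be sketched with dynamic programming (the L-shaped case applies the same idea along both legs of the L); the function name is mine:

```python
import numpy as np

def min_cut_seam(overlap_a, overlap_b):
    """Minimum-cost vertical seam (one column index per row) through the
    squared difference of two overlapping strips, via dynamic programming."""
    cost = (np.asarray(overlap_a, float) - np.asarray(overlap_b, float)) ** 2
    h, w = cost.shape
    dp = cost.copy()
    for i in range(1, h):
        for j in range(w):
            lo, hi = max(j - 1, 0), min(j + 2, w)
            dp[i, j] += dp[i - 1, lo:hi].min()
    seam = [int(np.argmin(dp[-1]))]
    for i in range(h - 2, -1, -1):  # backtrack upward from the cheapest bottom cell
        j = seam[-1]
        lo, hi = max(j - 1, 0), min(j + 2, w)
        seam.append(lo + int(np.argmin(dp[i, lo:hi])))
    return seam[::-1]
```

Pixels left of the seam come from one patch and pixels right of it from the other, so the cut passes through wherever the two patches already agree.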

Below are the results of performing the algorithm:

Q3, Hole filling

Here we want to remove the person and the birds from each image so that the final result looks natural to the human eye. The source pictures are:

I've used the texture synthesis method explained in the previous part. The results look like this:

The result isn't perfect, though; for better output we would need other methods such as Patch Match.


HW4

Q1, K-means

In this problem I implemented the K-means clustering algorithm. It's not strictly an image segmentation algorithm, but it can serve as a simple way of doing so. We are trying to cluster the points given in the Points.txt file. Below is a representation of the points in 2D space:

As is obvious from the plot, the best segmentation would divide the points into two circles. However, the default K-means method (with K=2) results in the following segmentation:

But if we change the feature to the distance from the origin, K-means behaves as expected:
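The experiment can be sketched as below: plain Lloyd's K-means run on the radial feature. The ring radii and noise here are made up for illustration, not the actual Points.txt data:

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain Lloyd's K-means: assign each point to its nearest centroid,
    recompute centroids, repeat."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels

# Two concentric rings: K-means on raw (x, y) coordinates cuts the plane in
# half, but K-means on the distance-from-origin feature separates the rings.
rng = np.random.default_rng(3)
angles = rng.uniform(0.0, 2.0 * np.pi, 40)
radii = np.concatenate([np.full(20, 1.0), np.full(20, 5.0)]) * rng.uniform(0.95, 1.05, 40)
points = np.column_stack([radii * np.cos(angles), radii * np.sin(angles)])
labels = kmeans(np.linalg.norm(points, axis=1)[:, None], 2)
```

The feature change works because, in radius space, the two rings become two well-separated 1D blobs, exactly the situation K-means is good at.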

Q2, Mean-shift

Mean-shift is a segmentation algorithm that can be used to group pixels based on their color and spatial distribution. In this question, after grouping similar pixels into one cluster, I replace all of the pixels' colors with the cluster's average color. However, the algorithm is pretty slow. Below you can see the original and segmented images:
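The mean-shift iteration can be sketched as below with a flat kernel on small point sets; segmenting a real image would use joint color-plus-position features and a smarter neighbor search, which is also why the full algorithm is slow:

```python
import numpy as np

def mean_shift(X, bandwidth=1.0, iters=30):
    """Shift every point to the mean of the original points within
    `bandwidth` of it (flat kernel) until points collapse onto modes."""
    pts = X.astype(np.float64).copy()
    for _ in range(iters):
        for i in range(len(pts)):
            dists = np.linalg.norm(X - pts[i], axis=1)
            pts[i] = X[dists < bandwidth].mean(axis=0)
    return pts
```

Points that converge to the same mode belong to the same cluster, and replacing each cluster with its mode color gives the flattened, posterized look of the segmented image.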

Q3, SLIC

SLIC is an oversegmentation method which can be used in the initial stages of segmenting images. We are trying to oversegment the image and find superpixels. The number of clusters must be given before starting the algorithm. This is the original image that will be oversegmented:

The results for different numbers of clusters can be seen below:

64 clusters 256 clusters
1024 clusters 2048 clusters

Q4, Segmentation

In this problem we are trying to extract birds from this image:

As you can see, it's not straightforward to choose the best method of segmentation (at first glance it's even hard to recognize the birds!).

I've used the "grabcut" algorithm along with "contours" and some modifications. "Grabcut" runs multiple times, and by using filters, morphology, a voting system, and contours I achieved this result:

The result is not perfect, though. Also, since "grabcut" is a fairly random algorithm, the result may change slightly between runs. To mitigate this, I ran the algorithm multiple times and used a voting method to preserve the dominant contours.

Q5, Active contours

In this problem we are using "active contours" to segment the "tasbih" from the image. This is the original image:

The user must draw an initial contour which encloses the desired object. This is a typical contour:

The snake may get stuck at some phases, so in those stages I randomly push points toward the center of the contour. You can see a short movie of the initial contour and the way it changes shape to fit the "tasbih" in the `HW4/contour.mp4` file.

This is the final result:


HW5

Q1, Morphing

Here we are morphing two faces together in a way that looks natural to the human eye. The two original images are:

res01.jpg res02.jpg

The initial corresponding points in each image are written in res01-points.txt and res02-points.txt. These are the points:

points of res01.jpg points of res02.jpg

Next, the code uses the Delaunay algorithm to draw triangles between the points of res01.jpg. I only compute the triangulation in res01.jpg and transfer it to res02.jpg; this ensures the triangles cover similar, corresponding areas of both images. These are the triangles:

triangles of res01.jpg triangles of res02.jpg

After that, it is time to find an affine transformation from each triangle in res01.jpg to the corresponding triangle in res02.jpg. Once the transformations are found, we move the points by a predefined coefficient (in the range [0,1]) and merge the images at each step. The final result is a 3-second GIF file that can be seen below:
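The per-triangle affine step can be sketched as one 3x3 linear solve per triangle; the function names here are mine, not the assignment's:

```python
import numpy as np

def affine_from_triangles(tri_src, tri_dst):
    """Solve for the 2x3 affine matrix M with M @ [x, y, 1] = [u, v],
    from three corresponding triangle vertices."""
    A = np.hstack([np.asarray(tri_src, float), np.ones((3, 1))])
    B = np.asarray(tri_dst, float)
    return np.linalg.solve(A, B).T  # shape (2, 3)

def morph_point(M, p, t):
    """Move a point a fraction t of the way along the affine map."""
    p = np.asarray(p, float)
    q = M @ np.append(p, 1.0)
    return (1 - t) * p + t * q
```

Sweeping the coefficient t from 0 to 1 and blending the warped frames is what produces the morph sequence for the GIF.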

Q2, Poisson blending

Poisson blending is a very interesting method for blending two different images so that the result looks quite natural. Here I've chosen these two images:

res05.jpg res06.jpg

I'm blending res05.jpg into res06.jpg. In Poisson blending we must solve a linear system of equations. However, the number of elements and the required memory are huge (for this image, about 14 GB of RAM!). Luckily, the system is sparse, so I used the scipy package to solve it. This is the result of blending, next to naively cropping the moon and pasting it into res06.jpg:

blended image cropped image

As you can see, the blended image is far more natural than the cropped one.
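The sparse system described above can be sketched as follows, assuming the mask stays off the image border: one 4-neighbour Laplacian row per masked pixel, with the source's gradients as the guidance field and the target as the Dirichlet boundary. The function name is mine:

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import spsolve

def poisson_blend_region(target, source, mask):
    """Solve the discrete Poisson equation inside `mask`: keep the source's
    gradients while matching the target on the mask boundary."""
    idx = -np.ones(target.shape, dtype=int)
    ys, xs = np.nonzero(mask)
    idx[ys, xs] = np.arange(len(ys))
    A = lil_matrix((len(ys), len(ys)))
    b = np.zeros(len(ys))
    for k, (y, x) in enumerate(zip(ys, xs)):
        A[k, k] = 4.0
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            b[k] += float(source[y, x]) - float(source[ny, nx])  # guidance gradient
            if mask[ny, nx]:
                A[k, idx[ny, nx]] = -1.0
            else:
                b[k] += float(target[ny, nx])  # Dirichlet boundary value
    out = target.astype(np.float64)
    out[ys, xs] = spsolve(A.tocsr(), b)
    return out
```

Since each row has at most five nonzeros, the sparse representation is what keeps the memory far below the dense system's roughly 14 GB.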

Q3, Blending and feathering

Just like the previous question, we are again blending two images, but not with Poisson blending; this time we use a Laplacian stack. The original images are:

res08.jpg res09.jpg

The dimensions and position of the cucumber don't match the banana, so before blending I warp res08.jpg toward res09.jpg. After that, by means of a Laplacian stack, I blend the two images and create a new fruit! Instead of resizing the images, I change the Gaussian filter at each step. The final result is:
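The no-resize Laplacian stack can be sketched as below: each level is the difference between successive blurs (only the Gaussian sigma grows, nothing is downsampled), and the blend mask is blurred with the matching sigma so the seam softens more at coarse levels. The sigma schedule is an illustrative assumption:

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Direct 2D Gaussian convolution with edge padding."""
    size = int(6 * sigma) | 1
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    kernel = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    kernel /= kernel.sum()
    pad = size // 2
    padded = np.pad(img.astype(np.float64), pad, mode="edge")
    h, w = img.shape
    out = np.zeros((h, w))
    for di in range(size):
        for dj in range(size):
            out += kernel[di, dj] * padded[di:di + h, dj:dj + w]
    return out

def laplacian_stack_blend(a, b, mask, sigmas=(1.0, 2.0, 4.0)):
    """Blend band by band: each Laplacian level is the image minus a more
    strongly blurred copy; the mask is blurred per level, and the residual
    low-pass band is blended last."""
    prev_a = a.astype(np.float64)
    prev_b = b.astype(np.float64)
    out = np.zeros_like(prev_a)
    for sigma in sigmas:
        ga = gaussian_blur(prev_a, sigma)
        gb = gaussian_blur(prev_b, sigma)
        m = gaussian_blur(mask.astype(np.float64), sigma)
        out += m * (prev_a - ga) + (1 - m) * (prev_b - gb)
        prev_a, prev_b = ga, gb
    m = gaussian_blur(mask.astype(np.float64), sigmas[-1])
    out += m * prev_a + (1 - m) * prev_b  # residual low-frequency band
    return out
```

The per-band sums telescope, so when both inputs are identical the blend returns the input exactly; the interesting behavior appears only where the mask transitions.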
