OpenCV & Python – Feature Detection


In OpenCV, Feature Detection is a technique used to identify distinctive key points or features in an image. These keypoints are locally unique and can be used to compare and match different images, facilitating operations such as object recognition, motion tracking, 3D reconstruction, and other computer vision applications.


What are Features?

“Features” in Image Analysis are distinctive points or regions of an image that are particularly significant for the description and analysis of the image itself. These features are chosen because they are locally unique and can be used to identify or compare different parts of an image. In the context of feature detection, these are key points or regions that have distinctive properties compared to their neighborhood.

Features play a crucial role in many computer vision and image analysis applications. Here are some of the main roles:

  • Matching and Alignment: Features are often used to compare different images or video frames. Once key features are identified and extracted, you can search for matches between them to align or compare images.
  • Object Recognition: In object recognition applications, features are used to identify specific parts or distinctive characteristics of objects. This is useful for recognizing objects in different poses or lighting conditions.
  • Motion Tracking: In motion tracking, features can be used to follow the progress of key points across successive frames of a video. This is useful, for example, in tracking the movement of an object or person.
  • 3D reconstruction: In the context of stereoscopic vision, features can be used to estimate the three-dimensional geometry of a scene. By identifying the same features in multiple images, it is possible to calculate the depth and reconstruct the scene in 3D.
  • Dimension Reduction: In applications where it is necessary to reduce the complexity of the image, features can be used to more compactly represent significant information. For example, using feature descriptors instead of the entire image.
  • Motion Estimation and Alignment: In the field of video surveillance or motion analysis, features are used to estimate the movement of objects or to align different video sequences.

Different feature detection algorithms, such as Harris Corner Detection, Shi-Tomasi, FAST, and others, aim to locate key points that are robust to variations in illumination, rotations, and scales. Once identified, these features can be used in a variety of contexts to analyze, understand and process the visual information contained in images.

Feature Detection

Feature Detection is a fundamental image analysis technique that focuses on identifying and locating distinctive key points or significant regions in an image. These keypoints are chosen because they are locally unique and can be used as salient references to perform various image analysis and manipulation operations.

Imagine looking at an image and wanting to locate the most interesting or significant parts, such as the corners of a building, the edges of an object, or regions with unique patterns. Feature detection aims to automatically identify these key points, allowing the information contained in the image to be described effectively and distinctively.

These key points can then be used for various purposes. For example, in object recognition, features are used to identify and distinguish specific parts of an image. In motion tracking, features are followed through successive frames to track the movement of an object over time. Furthermore, in 3D reconstruction applications, features are fundamental for estimating the three-dimensional geometry of a scene.

The Feature Detection process involves the use of algorithms that analyze the light intensity of pixels in the image, looking for patterns or structures that are relevant to the specific application. These algorithms identify key points based on properties such as changes in intensity, angles, and other distinctive attributes.

In essence, Feature Detection is a key step in understanding and processing visual information, providing an effective way to identify and use unique features present in images in order to perform more complex visual analysis and interpretation tasks.

Some of the Feature Detection Algorithms

OpenCV provides several algorithms for feature detection, including:

  • Harris Corner Detection
  • Shi-Tomasi Corner Detection
  • FAST (Features from Accelerated Segment Test)
  • ORB (Oriented FAST and Rotated BRIEF)
  • SIFT (Scale-Invariant Feature Transform)

Let’s now look at a small series of examples to understand how each of them works. For those who would like to delve deeper, there are some in-depth articles to consult on the site.

To do these examples we will use this image:

blackandwhite

Click on the image above and download it to your computer, saving it as blackandwhite.jpg.

Harris Corner Detection

Harris Corner Detection is an algorithm used to detect corners in images. It identifies key points based on the change in light intensity in the neighborhood of a specific point.

import cv2
import numpy as np
import matplotlib.pyplot as plt

# Load the image in grayscale
img = cv2.imread('blackandwhite.jpg', 0)

# Compute the Harris response map (blockSize=2, Sobel aperture=3, k=0.04)
dst = cv2.cornerHarris(img, 2, 3, 0.04)

plt.title('Harris Corners')
plt.imshow(dst, cmap='jet')
plt.show()

Running the code produces the following result:

Harris Corner example

If you are interested in learning more about this method, here is the in-depth article on the Harris Corner Detection technique.


Shi-Tomasi Corner Detection

The Shi-Tomasi Corner Detection algorithm is very similar to Harris Corner Detection, but is considered an improvement in terms of key point selection. In fact, it uses a corner quality evaluation metric and selects key points based on this metric. Let’s see an example here:

import cv2
import numpy as np
import matplotlib.pyplot as plt

# Load the image in grayscale
img = cv2.imread('blackandwhite.jpg', 0)

# Detect up to 25 corners at least 10 px apart, keeping only those with
# at least 1% of the strongest corner's quality score
corners = cv2.goodFeaturesToTrack(img, maxCorners=25, qualityLevel=0.01, minDistance=10)

plt.imshow(img, cmap='gray')
plt.title('Shi-Tomasi Corners')
corners = np.intp(corners)  # convert corner coordinates to integers
for i in corners:
    x, y = i.ravel()
    plt.scatter(x, y, color='red', s=30)
plt.show()

Running the code, you get the following result:

Shi-Tomasi Corner Detection example

FAST (Features from Accelerated Segment Test)

FAST is a faster algorithm for feature detection. It uses an accelerated segment test to determine key points quickly and is known for its computational efficiency. In this case too, let's look at a short example applied to the same image:

import cv2
import matplotlib.pyplot as plt

img = cv2.imread('blackandwhite.jpg', 0)
fast = cv2.FastFeatureDetector_create()

# Find the key points
kp = fast.detect(img, None)

img_with_keypoints = cv2.drawKeypoints(img, kp, None, color=(255, 0, 0))
plt.imshow(img_with_keypoints)
plt.title('FAST Features')
plt.show()

Running the code you get the following result:

Fast Feature example

ORB (Oriented FAST and Rotated BRIEF)

ORB is an algorithm that combines FAST (Features from Accelerated Segment Test) detection with the BRIEF (Binary Robust Independent Elementary Features) descriptor. FAST is used to locate key points, while BRIEF generates binary descriptors for these points. ORB is known for its computational speed and robustness, making it suitable for real-time applications.

Here is an example for this algorithm too:

import cv2
import matplotlib.pyplot as plt

img = cv2.imread('blackandwhite.jpg', 0)
orb = cv2.ORB_create()

# Find the key points and compute their descriptors in one call
kp, des = orb.detectAndCompute(img, None)

img_with_keypoints = cv2.drawKeypoints(img, kp, None, color=(255, 0, 0))
plt.imshow(img_with_keypoints)
plt.title('ORB Features')
plt.show()

Running the code, you will get this result:

ORB Features example

SIFT (Scale-Invariant Feature Transform)

SIFT is a feature detection algorithm that identifies scale- and orientation-invariant keypoints. It uses a combination of edge localization, Gaussian filtering, and other techniques to find robust keypoints. SIFT is known for its scale invariance, making it suitable for object recognition under different conditions.

Let’s see an example in this case too:

import cv2
import matplotlib.pyplot as plt

img = cv2.imread('blackandwhite.jpg', 0)
sift = cv2.SIFT_create()

# Find the key points and descriptors with SIFT
kp, des = sift.detectAndCompute(img, None)

# Display the results with Matplotlib
img_with_keypoints = cv2.drawKeypoints(img, kp, None, color=(255, 0, 0))
plt.imshow(img_with_keypoints)
plt.title('SIFT Features')
plt.show()

Running the code, we will get this result:

SIFT Features example

Conclusion

These are just a few examples, and OpenCV offers many other options for feature detection. The choice of algorithm often depends on the specific application and performance requirements.
