
OpenCV & Python – Feature Detection


In OpenCV, Feature Detection is a technique used to identify distinctive key points or features in an image. These keypoints are locally unique and can be used to compare and match different images, facilitating operations such as object recognition, motion tracking, 3D reconstruction, and other computer vision applications.


What are Features?

“Features” in Image Analysis are distinctive points or regions of an image that are particularly significant for the description and analysis of the image itself. These features are chosen because they are locally unique and can be used to identify or compare different parts of an image. In the context of feature detection, these are key points or regions that have distinctive properties compared to their neighborhood.

Features play a crucial role in many computer vision and image analysis applications, such as object recognition, motion tracking, and 3D reconstruction.

Different feature detection algorithms, such as Harris Corner Detection, Shi-Tomasi, FAST, and others, aim to locate key points that are robust to variations in illumination, rotation, and scale. Once identified, these features can be used in a variety of contexts to analyze, understand, and process the visual information contained in images.

Feature Detection

Feature Detection is a fundamental image analysis technique that focuses on identifying and locating distinctive key points or significant regions in an image. These keypoints are chosen because they are locally unique and can be used as salient references to perform various image analysis and manipulation operations.

Imagine looking at an image and wanting to locate the most interesting or significant parts, such as the corners of a building, the edges of an object, or regions with unique patterns. Feature detection aims to automatically identify these key points, allowing the information contained in the image to be described effectively and distinctively.

These key points can then be used for various purposes. For example, in object recognition, features are used to identify and distinguish specific parts of an image. In motion tracking, features are followed through successive frames to track the movement of an object over time. Furthermore, in 3D reconstruction applications, features are fundamental for estimating the three-dimensional geometry of a scene.

The Feature Detection process involves the use of algorithms that analyze the light intensity of pixels in the image, looking for patterns or structures that are relevant to the specific application. These algorithms identify key points based on properties such as changes in intensity, angles, and other distinctive attributes.
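
For a concrete sense of what "changes in intensity" means, here is a minimal sketch (not part of the original examples) that computes horizontal and vertical gradients with the Sobel operator on the same blackandwhite.jpg image used in the examples below; corner detectors such as Harris build on exactly this kind of local gradient information.

import cv2
import matplotlib.pyplot as plt

# Load the image in grayscale (same file as in the examples below)
img = cv2.imread('blackandwhite.jpg', 0)

# Local changes in intensity along x and y, estimated with the Sobel operator
gx = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=3)
gy = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=3)

plt.subplot(1, 2, 1)
plt.imshow(abs(gx), cmap='gray')
plt.title('Gradient X')
plt.subplot(1, 2, 2)
plt.imshow(abs(gy), cmap='gray')
plt.title('Gradient Y')
plt.show()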

In essence, Feature Detection is a key step in understanding and processing visual information, providing an effective way to identify and use unique features present in images in order to perform more complex visual analysis and interpretation tasks.

Some of the Feature Detection Algorithms

OpenCV provides several algorithms for feature detection, including:

Harris Corner Detection
Shi-Tomasi Corner Detection
FAST (Features from Accelerated Segment Test)
ORB (Oriented FAST and Rotated BRIEF)
SIFT (Scale-Invariant Feature Transform)

Let’s now look at a small series of examples to understand how each of them works. For those who would like to delve deeper, there are some in-depth articles to consult on the site.

To run these examples we will use this image:

Click on the image above and download it to your computer, saving it as blackandwhite.jpg.
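
Since cv2.imread silently returns None when a file cannot be found, a quick sanity check like the following sketch can save some confusion before running the examples.

import cv2

# cv2.imread does not raise an error for a missing file: it just returns None,
# which later causes confusing failures in the detection functions.
img = cv2.imread('blackandwhite.jpg', 0)
if img is None:
    raise FileNotFoundError('blackandwhite.jpg not found in the working directory')
print(img.shape)  # (height, width) of the grayscale image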

Harris Corner Detection

Harris Corner Detection is an algorithm used to detect corners in images. It identifies key points based on the change in light intensity in the neighborhood of a given point.

import cv2
import numpy as np
import matplotlib.pyplot as plt

# Load the image in grayscale
img = cv2.imread('blackandwhite.jpg', 0)

# Compute the Harris response map (blockSize=2, Sobel aperture=3, k=0.04)
dst = cv2.cornerHarris(img, 2, 3, 0.04)

# Display the corner response map
plt.title('Harris Corners')
plt.imshow(dst, cmap='jet')
plt.show()

Executing this code, you get the following result:

If you are interested in learning more about this method, here is the in-depth article on the Harris Corner Detection technique.

IN-DEPTH ARTICLE

Harris Corner Detection
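
As a possible follow-up (not part of the original example), the Harris response map can be thresholded to mark the detected corners directly on the image; the 1% threshold factor below is an arbitrary but common choice.

import cv2
import matplotlib.pyplot as plt

img = cv2.imread('blackandwhite.jpg', 0)
dst = cv2.cornerHarris(img, 2, 3, 0.04)

# Mark in red every pixel whose Harris response exceeds 1% of the maximum
img_color = cv2.cvtColor(img, cv2.COLOR_GRAY2RGB)
img_color[dst > 0.01 * dst.max()] = (255, 0, 0)

plt.imshow(img_color)
plt.title('Harris Corners on the image')
plt.show()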

Shi-Tomasi Corner Detection

The Shi-Tomasi Corner Detection algorithm is very similar to Harris Corner Detection, but is considered an improvement in terms of key point selection. In fact, it uses a corner quality evaluation metric and selects key points based on this metric. Let’s see an example here:

import cv2
import numpy as np
import matplotlib.pyplot as plt

# Load the image in grayscale
img = cv2.imread('blackandwhite.jpg', 0)

# Find the 25 strongest corners according to the Shi-Tomasi quality measure
corners = cv2.goodFeaturesToTrack(img, maxCorners=25, qualityLevel=0.01, minDistance=10)

# Draw the detected corners on the image
plt.imshow(img, cmap='gray')
plt.title('Shi-Tomasi Corners')
corners = np.intp(corners)
for i in corners:
    x, y = i.ravel()
    plt.scatter(x, y, color='red', s=30)
plt.show()

Executing the code, you get the following result:
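
Since the corners returned by goodFeaturesToTrack are often used as starting points for motion tracking, here is a hedged sketch of that use with Lucas-Kanade optical flow; frame1.jpg and frame2.jpg are hypothetical consecutive frames, not files provided with this article.

import cv2
import matplotlib.pyplot as plt

# Hypothetical consecutive frames of a moving scene (placeholder file names)
frame1 = cv2.imread('frame1.jpg', 0)
frame2 = cv2.imread('frame2.jpg', 0)

# Shi-Tomasi corners in the first frame become the points to track
p0 = cv2.goodFeaturesToTrack(frame1, maxCorners=50, qualityLevel=0.01, minDistance=10)

# Lucas-Kanade optical flow estimates where each corner moved in the second frame
p1, st, err = cv2.calcOpticalFlowPyrLK(frame1, frame2, p0, None)

plt.imshow(frame2, cmap='gray')
for (x1, y1), (x0, y0), ok in zip(p1.reshape(-1, 2), p0.reshape(-1, 2), st.ravel()):
    if ok:  # keep only the points that were tracked successfully
        plt.plot([x0, x1], [y0, y1], color='red')
plt.title('Tracked Shi-Tomasi corners')
plt.show()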

FAST (Features from Accelerated Segment Test)

FAST is a faster feature detection algorithm. It uses an accelerated segment test to quickly determine key points and is known for its computational efficiency. Here too is a short example, applied to the same image:

import cv2
import matplotlib.pyplot as plt

# Load the image in grayscale
img = cv2.imread('blackandwhite.jpg', 0)

# Create the FAST detector with default parameters
fast = cv2.FastFeatureDetector_create()

# Find the key points
kp = fast.detect(img, None)

# Draw the key points and display the result
img_with_keypoints = cv2.drawKeypoints(img, kp, None, color=(255, 0, 0))
plt.imshow(img_with_keypoints)
plt.title('FAST Features')
plt.show()

Running the code, you get the following result:
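
With its default settings FAST can return a very large number of keypoints. One possible refinement, sketched below, is to raise the detection threshold (the value 50 is an arbitrary example) while keeping non-maximum suppression enabled.

import cv2
import matplotlib.pyplot as plt

img = cv2.imread('blackandwhite.jpg', 0)

# A higher threshold makes the segment test stricter, so fewer keypoints survive
fast = cv2.FastFeatureDetector_create(threshold=50, nonmaxSuppression=True)
kp = fast.detect(img, None)
print('Keypoints found with threshold=50:', len(kp))

img_with_keypoints = cv2.drawKeypoints(img, kp, None, color=(255, 0, 0))
plt.imshow(img_with_keypoints)
plt.title('FAST Features (threshold=50)')
plt.show()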

ORB (Oriented FAST and Rotated BRIEF)

ORB is an algorithm that combines FAST (Features from Accelerated Segment Test) detection with the BRIEF (Binary Robust Independent Elementary Features) descriptor. FAST is used to locate key points, while BRIEF generates binary descriptors for these points. ORB is known for its computational speed and robustness, making it suitable for real-time applications.

Here is an example for this algorithm too:

import cv2
import matplotlib.pyplot as plt

# Load the image in grayscale
img = cv2.imread('blackandwhite.jpg', 0)

# Create the ORB detector
orb = cv2.ORB_create()

# Find the key points and compute their binary descriptors
kp, des = orb.detectAndCompute(img, None)

# Draw the key points and display the result
img_with_keypoints = cv2.drawKeypoints(img, kp, None, color=(255, 0, 0))
plt.imshow(img_with_keypoints)
plt.title('ORB Features')
plt.show()

Running it, you will get this result:
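
Because ORB also produces binary descriptors, a natural follow-up is matching the keypoints of two images. The sketch below assumes two overlapping views of the same scene saved as img1.jpg and img2.jpg (placeholder names, not files provided with this article).

import cv2
import matplotlib.pyplot as plt

# Placeholder file names: two overlapping views of the same scene
img1 = cv2.imread('img1.jpg', 0)
img2 = cv2.imread('img2.jpg', 0)

# Detect keypoints and compute binary descriptors in both images
orb = cv2.ORB_create()
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Binary descriptors are compared with the Hamming distance
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(bf.match(des1, des2), key=lambda m: m.distance)

# Draw the 20 best matches
img_matches = cv2.drawMatches(img1, kp1, img2, kp2, matches[:20], None, flags=2)
plt.imshow(img_matches)
plt.title('ORB Matches')
plt.show()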

SIFT (Scale-Invariant Feature Transform)

SIFT is a feature detection algorithm that identifies scale- and orientation-invariant keypoints. It uses a combination of difference-of-Gaussians filtering, keypoint localization, and other techniques to find robust keypoints. SIFT is known for its scale invariance, making it suitable for object recognition under different conditions.

Let’s see an example in this case too:

import cv2
import matplotlib.pyplot as plt

# Load the image in grayscale
img = cv2.imread('blackandwhite.jpg', 0)

# Create the SIFT detector
sift = cv2.SIFT_create()

# Find the key points and descriptors with SIFT
kp, des = sift.detectAndCompute(img, None)

# Display the results with Matplotlib
img_with_keypoints = cv2.drawKeypoints(img, kp, None, color=(255, 0, 0))
plt.imshow(img_with_keypoints)
plt.title('SIFT Features')
plt.show()

Executing it, we will get this result:
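
SIFT descriptors are floating-point vectors, so they are usually matched with the L2 norm, often combined with Lowe's ratio test to discard ambiguous matches. The following sketch again assumes two placeholder images, img1.jpg and img2.jpg.

import cv2
import matplotlib.pyplot as plt

# Placeholder file names for two views of the same scene
img1 = cv2.imread('img1.jpg', 0)
img2 = cv2.imread('img2.jpg', 0)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Float descriptors are compared with the L2 norm; knnMatch returns the
# two closest candidates for each descriptor (assuming enough keypoints)
bf = cv2.BFMatcher(cv2.NORM_L2)
pairs = bf.knnMatch(des1, des2, k=2)

# Lowe's ratio test: keep a match only if it is clearly better than the runner-up
good = [m for m, n in pairs if m.distance < 0.75 * n.distance]

img_matches = cv2.drawMatches(img1, kp1, img2, kp2, good, None, flags=2)
plt.imshow(img_matches)
plt.title('SIFT Matches (ratio test)')
plt.show()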

Conclusion

These are just a few examples, and OpenCV offers many other options for feature detection. The choice of algorithm often depends on the specific application and performance requirements.
