In deep learning, my image resolution is too large and causes memory overflow. So I want to split the image into small pieces and export the annotation information from the JSON into txt files, with each txt file's annotation information corresponding to one cropped small image. How do I do that?
1 Answer
I had the same issue with my RAM when working with large images. Try to find the sweet spot in sample size so that you stay under the 4 KB minimum file size and fit an integer number of samples on your image; in my experience that gives the best result. As for your question, it depends on how you want to format your JSON. If you use cv2, you could simply do it like this:
import cv2
import numpy as np
import json

def split_image(image_path, output_folder, grid_size):
    # Load the image
    image = cv2.imread(image_path, flags=cv2.IMREAD_COLOR)
    # !!Swap color space because cv2 uses the BGR color space when reading colors!!
    image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

    # Get dimensions
    height, width, channels = image.shape

    # Calculate tile size based on the number of rows and columns
    rows, cols = grid_size
    grid_height = height // rows
    grid_width = width // cols

    for r in range(rows):
        for c in range(cols):
            # Crop the image to create small pieces
            start_y = r * grid_height
            end_y = start_y + grid_height
            start_x = c * grid_width
            end_x = start_x + grid_width
            cropped_image = image[start_y:end_y, start_x:end_x]

            # Process each cropped image, annotate, and gather annotation information
            annotation_info = {
                "file_name": f"cropped_{r}_{c}.jpg",  # Update file name as needed
                "image_size": (grid_width, grid_height),  # Update image size accordingly
                "annotations": [
                    # Your annotation details for this cropped image here
                    # You may need to use a dedicated annotation tool or do it manually
                ]
            }

            # Save the cropped image (convert back to BGR because cv2.imwrite expects BGR)
            cv2.imwrite(f"{output_folder}/cropped_{r}_{c}.jpg",
                        cv2.cvtColor(cropped_image, cv2.COLOR_RGB2BGR))

            # Save annotation information to a JSON file
            with open(f"{output_folder}/annotation_{r}_{c}.json", "w") as json_file:
                json.dump(annotation_info, json_file)

# Example usage:
image_path = "path/to/your/image.jpg"
output_folder = "output_folder"
grid_size = (3, 3)  # Split image into a 3x3 grid (adjust as needed)
split_image(image_path, output_folder, grid_size)
The annotation information has to fit your use case, so this is just an example! You need to decide which information you need and fill it in yourself. Note that using a dedicated annotation tool is your best bet. If annotations already exist for the full image, you can also remap them into each tile; a rough sketch follows below.
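For example, if your original annotations are stored as labelme-style JSON (a "shapes" list where each shape has a "label", a "shape_type", and a list of "points" in full-image pixel coordinates), one rough way to fill the "annotations" field for a tile is to shift each shape's points by the tile's offset and keep only the points that land inside the tile. This is only a sketch under those assumptions (crop_annotations is my own helper, not a standard function), and real code would need proper polygon/box clipping for shapes that cross tile borders:

import json

def crop_annotations(labelme_json_path, start_x, start_y, tile_w, tile_h):
    # Load the full-image annotation file (labelme-style layout assumed)
    with open(labelme_json_path) as f:
        data = json.load(f)

    tile_shapes = []
    for shape in data.get("shapes", []):
        new_points = []
        for x, y in shape["points"]:
            # Keep only points that fall inside this tile, shifted to tile-local coordinates
            if start_x <= x < start_x + tile_w and start_y <= y < start_y + tile_h:
                new_points.append([x - start_x, y - start_y])
        if new_points:  # the shape lies (at least partly) inside the tile
            tile_shapes.append({
                "label": shape["label"],
                "points": new_points,
                "shape_type": shape.get("shape_type", "polygon"),
            })
    return tile_shapes

The returned list could then be dropped into the "annotations" field of annotation_info for that tile inside the loop above.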
To read the information back, open the file and pass the file object to json.load() (it takes a file object, not a filename), together with whatever tools you used for your annotations.
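For example, using the annotation file name from the code above:

with open("output_folder/annotation_0_0.json") as json_file:
    annotation_info = json.load(json_file)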
Hope this helps; there isn't much detail to go on in your question.
In fact, the most important thing is that I do not know how to match the annotation information in the original image with the segmented small images one by one. I am using labelme to annotate the objects in the picture. How do I write code to get the object annotations for the new sub-images? – Lishumuzixin Jul 25 '23 at 06:11