I'm using PyQt to display images in the following way (I'm omitting some tedious details to show just the general approach):
class MyGraphicsView(QGraphicsView):
    def __init__(self, parent=None):
        super(MyGraphicsView, self).__init__(parent)
        self.scene = QGraphicsScene()
        self.setScene(self.scene)  # attach the scene to the view
        # myPixels is a uint8 array; QImage takes width (columns) before height (rows)
        qImage = QImage(myPixels.data, numColumns, numRows, QImage.Format_Indexed8)
        qImage.setColorTable(self.gray_color_table)
        self.sceneImage = self.scene.addPixmap(QPixmap.fromImage(qImage))
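(gray_color_table is one of the omitted details; for Format_Indexed8 it would be a 256-entry grayscale palette built once up front, something like:)

from PyQt4.QtGui import qRgb  # or PyQt5.QtGui
self.gray_color_table = [qRgb(i, i, i) for i in range(256)]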
Everything works and the image displays in my window properly. I then display portions of the image by doing the following:
self.fitInView(roiCoords[0], roiCoords[2], roiCoords[1]-roiCoords[0], roiCoords[3]-roiCoords[2], Qt.KeepAspectRatio)
where "roiCoords" is just a list containing xmin, xmax, ymin, ymax that are generated programmatically at runtime. The problem arises when the user resizes the window in which the image is being displayed. I'm not invoking "fitInView(...)" in the windows resize event because I prefer the default behavior i.e. the zoom remains the same but as the user resizes the window the image is cropped (or more image comes into view). My question is how to determine the portion of the image (pixmap) that is viewable when the user resizes the window so that they can be stored (the reason for this is that I'm trying to update a region of interest rectangle that's being drawn on top of a smaller representation of the image and need to keep this in sync in terms of its aspect ratio and position as the user resizes the window that contains the image.) I've hunted around but can't find anything. Any help greatly appreciated.
Update: I just realized I can probably check the view's dimensions in the resize event and then map them to scene coordinates. Some variant of the following:
def resizeEvent(self, event):
    newWidth = self.width()
    newHeight = self.height()
    # resizeEvent is on the view itself, so call mapToScene on self
    upperLeftROICoord = self.mapToScene(0, 0)
    lowerRightROICoord = self.mapToScene(newWidth - 1, newHeight - 1)
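A fuller sketch of that idea, reusing sceneImage from above (the overlay and its updateROI() hook are hypothetical, just to show where the sync would happen). Note that mapToScene works in viewport coordinates, so it maps the viewport's rectangle rather than the widget's own width/height, which would include scrollbars and the frame:

def resizeEvent(self, event):
    super(MyGraphicsView, self).resizeEvent(event)
    # Map the visible viewport rectangle into scene coordinates
    visibleSceneRect = self.mapToScene(self.viewport().rect()).boundingRect()
    # Clamp to the pixmap so the stored ROI never extends past the image
    roiRect = visibleSceneRect.intersected(self.sceneImage.sceneBoundingRect())
    # Store as [xmin, xmax, ymin, ymax] to match the roiCoords ordering
    self.visibleROI = [roiRect.left(), roiRect.right(),
                       roiRect.top(), roiRect.bottom()]
    # self.overlay.updateROI(self.visibleROI)  # hypothetical overlay hook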