I figured it out. Good news: it's compatible with fully managed Expo. Bad news: it's extremely ugly (your module would still be worth writing).
The summary is:
- Use expo-camera to display the view from the user’s camera
- Use expo-gl to load the Camera view from expo-camera into a GLView
- Use expo-gl to take a snapshot of the GLView
- Use expo-image-manipulator to crop the snapshot from step 3 down to the desired region of the screen
- Use react-native-canvas to read the pixels into an ImageData object
Some of the code:
First, my imports:
import { Camera } from "expo-camera";
import React, { useState, useEffect, useRef, useCallback } from "react";
import { GLView } from "expo-gl";
import * as FileSystem from "expo-file-system";
import * as ImageManipulator from "expo-image-manipulator";
import Canvas, { Image as CanvasImage } from "react-native-canvas";
import { StyleSheet, Text, TouchableOpacity, View, Image } from "react-native";
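One note before the code: the camera feed won't render until the user grants camera permission. A minimal sketch of the usual permission check inside the component, assuming expo-camera's Camera.requestPermissionsAsync:

// Ask for camera permission on mount; only render <Camera> once granted.
const [hasPermission, setHasPermission] = useState(null);
useEffect(() => {
  (async () => {
    const { status } = await Camera.requestPermissionsAsync();
    setHasPermission(status === "granted");
  })();
}, []);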
Steps 1 and 2 are accomplished here:
https://github.com/expo/expo/blob/master/apps/native-component-list/src/screens/GL/GLCameraScreen.tsx
The only touch-up necessary is to change the end of fragShaderSource
to the following:
void main() {
  fragColor = vec4(texture(cameraTexture, uv).rgb, 1.0);
}`;
This is because Expo's demo deliberately inverts the colors of the camera feed.
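For reference, here is a condensed sketch of how steps 1 and 2 fit together, paraphrased from the linked demo. cameraRef and glViewRef are assumed refs attached to the <Camera> and <GLView> components, and the shader/quad setup is elided:

const onContextCreate = async (gl) => {
  // expo-gl can create a GL texture that is kept up to date with the
  // camera feed (an instance method on the GLView component).
  const cameraTexture = await glViewRef.current.createCameraTextureAsync(
    cameraRef.current
  );
  // ...compile the demo's shaders, bind cameraTexture, and draw a
  // fullscreen quad every frame with requestAnimationFrame...
};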
Step 3: pass the gl object that GLView's onContextCreate gives you into this:
const takeFrame = async (gl) => {
  // Snapshot the current contents of the GL context; returns a file URI.
  const snapshot = await GLView.takeSnapshotAsync(gl, {
    format: "jpeg",
  });
  return snapshot.uri;
};
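Hypothetical usage, where glContext is the gl object saved from onContextCreate:

const onSnapshotPress = async () => {
  // Grab whatever the GLView is currently displaying.
  const uri = await takeFrame(glContext);
  console.log("Snapshot written to:", uri);
};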
Step 4: pass the uri returned by takeFrame here, along with the crop region (in snapshot pixel coordinates; see the note after the snippet):

const cropFrame = async (uri, { minX, minY, width, height }) => {
  const result = await ImageManipulator.manipulateAsync(uri, [
    {
      crop: {
        originX: minX,
        originY: minY,
        width,
        height,
      },
    },
  ]);
  return result.uri;
};
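One gotcha: ImageManipulator crops in image (snapshot) pixels, while layout and touch values are in screen points, so the region usually has to be scaled first. A hypothetical helper, assuming viewSize comes from the GLView's onLayout and snapshotSize from the width/height on the object takeSnapshotAsync returns:

const toSnapshotRegion = (region, viewSize, snapshotSize) => {
  // Scale a region given in view points into snapshot pixel coordinates.
  const scaleX = snapshotSize.width / viewSize.width;
  const scaleY = snapshotSize.height / viewSize.height;
  return {
    minX: Math.round(region.x * scaleX),
    minY: Math.round(region.y * scaleY),
    width: Math.round(region.width * scaleX),
    height: Math.round(region.height * scaleY),
  };
};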
Step 5: read the cropped image back as base64 and hand it to react-native-canvas. Here canvas is a ref to the <Canvas> component (see the sketch at the end), and setImgUri is optional state I use elsewhere to display the image:

const readImage = async (canvas, imgSrc, width, height) => {
  setImgUri(imgSrc);
  canvas.width = width;
  canvas.height = height;
  const context = canvas.getContext("2d");
  const image = new CanvasImage(canvas);
  // react-native-canvas can't load file:// URIs directly, so read the file
  // as base64 and feed it in as a data URI instead.
  const base64 = await FileSystem.readAsStringAsync(imgSrc, {
    encoding: FileSystem.EncodingType.Base64,
  });
  image.src = "data:image/jpeg;base64," + base64;
  image.addEventListener("load", () => {
    context.drawImage(image, 0, 0);
    // getImageData returns a Promise in react-native-canvas.
    context
      .getImageData(0, 0, canvas.width, canvas.height)
      .then((imageData) => {
        console.log(
          "Image data:",
          imageData,
          Object.values(imageData.data).length
        );
      })
      .catch((e) => {
        console.error("Error fetching image data:", e);
      });
  });
};
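Finally, a hypothetical sketch of how it all chains together; captureRegion, glContext, canvasRef, and region are names made up for illustration, and the <Canvas ref={canvasRef} /> can be rendered offscreen:

const captureRegion = async (region) => {
  const frameUri = await takeFrame(glContext); // step 3
  const croppedUri = await cropFrame(frameUri, region); // step 4
  await readImage(canvasRef.current, croppedUri, region.width, region.height); // step 5
};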
Please let me know if there's a better way to do this :)