
I am currently trying to implement color picking through the device camera in an Expo project with expo-camera. To do this, I need to read the color of a specific pixel on the screen, chosen by the user's tap.

What would be the best way to get that information?

My first thought was to snap a picture, use the tap coordinates to locate the corresponding pixel in the image, and then extract the color information from it. To that end, I tried the react-native-pixel-color library (and similar ones), but it seems to have been abandoned.

Please note that I do not currently know Swift, so coding a native solution is a last resort if all else fails.

Thanks!

VicHofs
  • Did you ever solve this? I am experiencing the same problem with `expo-camera`. – bearacuda13 May 24 '21 at 16:33
  • @bearacuda13 not as of yet. I plan on coding a working native module myself soon, but it is not a priority since this is for a personal project. Good luck if I can’t be of help :) – VicHofs May 24 '21 at 16:35

1 Answer


I figured it out. Good news: it's compatible with fully managed Expo. Bad news: it's extremely ugly (so your native module would still be worth writing).

The summary is:

  1. Use expo-camera to display the view from the user’s camera
  2. Use expo-gl to load the Camera view from expo-camera into a GLView
  3. Use expo-gl to take a snapshot of the GLView
  4. Use expo-image-manipulator to reduce the snapshot taken in step 3 to the desired region on the screen
  5. Use react-native-canvas to read the pixels into an ImageData object

Here is some of the code. First, my imports:

import { Camera } from "expo-camera";
import React, { useState, useEffect, useRef, useCallback } from "react";
import { GLView } from "expo-gl";
import * as FileSystem from "expo-file-system";
import * as ImageManipulator from "expo-image-manipulator";
import Canvas, { Image as CanvasImage } from "react-native-canvas";
import { StyleSheet, Text, TouchableOpacity, View, Image } from "react-native";

Steps 1 and 2 are accomplished here: https://github.com/expo/expo/blob/master/apps/native-component-list/src/screens/GL/GLCameraScreen.tsx

The only touch-up necessary is to change the end of fragShaderSource to the following:

void main() {
  fragColor = vec4(texture(cameraTexture, uv).rgb, 1.0);
}`;

This is because the demo by Expo inverts colors.
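
In case step 2 is unclear (it's asked about in the comments below), the linked example boils down to: render a Camera component, expose its preview to the GLView as a texture, and draw that texture with the shaders. A rough sketch of the wiring, paraphrasing the linked example with hooks (the ref names here are mine, not from the original answer):

const glViewRef = useRef(null); // ref to the <GLView />
const cameraRef = useRef(null); // ref to the <Camera />
const glRef = useRef(null); // the raw GL context, stashed for later use

const onContextCreate = async (gl) => {
  glRef.current = gl;
  // createCameraTextureAsync exposes the Camera preview as a GL texture
  // that the fragment shader above can sample from
  const cameraTexture = await glViewRef.current.createCameraTextureAsync(
    cameraRef.current
  );
  // ...from here, compile vertShaderSource/fragShaderSource, bind
  // cameraTexture, and start a render loop that draws a fullscreen quad
  // and calls gl.endFrameEXP() each frame, as in the linked file
};

In the JSX, the Camera gets ref={cameraRef} and the GLView gets ref={glViewRef} and onContextCreate={onContextCreate}; only the GLView actually needs to be visible.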

Step 3: pass the gl variable you receive from GLView's onContextCreate into this:

const takeFrame = async (gl) => {
  const snapshot = await GLView.takeSnapshotAsync(gl, {
    format: "jpeg",
  });
  return snapshot.uri;
};

Step 4: pass the uri from takeFrame, together with the region you want to keep, into this:

const cropFrame = async (uri, { minX, minY, width, height }) => {
  // minX, minY, width, and height describe the region of the snapshot
  // (in image pixels) to keep; see below for one way to derive them
  // from a tap

  const result = await ImageManipulator.manipulateAsync(uri, [
    {
      crop: {
        originX: minX,
        originY: minY,
        width,
        height,
      },
    },
  ]);

  return result.uri;
};
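
To get that crop region from a tap, one option (my sketch, not from the original answer; it reuses glRef from the wiring sketch above, and viewWidth/viewHeight are the GLView's on-screen dimensions, e.g. from onLayout) is to scale the touch point into snapshot pixels and crop down to a single pixel:

const handleTap = async (event, viewWidth, viewHeight) => {
  const { locationX, locationY } = event.nativeEvent;
  const uri = await takeFrame(glRef.current);

  // The snapshot's pixel size generally differs from the view's layout
  // size, so measure it before scaling the tap coordinates
  const size = await new Promise((resolve, reject) =>
    Image.getSize(uri, (w, h) => resolve({ w, h }), reject)
  );
  const minX = Math.floor((locationX / viewWidth) * size.w);
  const minY = Math.floor((locationY / viewHeight) * size.h);

  return cropFrame(uri, { minX, minY, width: 1, height: 1 });
};

Note that GL snapshots can come back vertically flipped on some platforms (takeSnapshotAsync has a flip option), so the Y mapping may need adjusting.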

Step 5: The base64 version of the image needs to be extracted and passed into react-native-canvas:

// Note: `canvas` here is a ref to the <Canvas /> component from
// react-native-canvas, and `setImgUri` is component state; their setup
// is not shown in this answer
const readImage = async (imgSrc, width, height) => {
  setImgUri(imgSrc);
  canvas.width = width;
  canvas.height = height;
  const context = canvas.getContext("2d");
  const image = new CanvasImage(canvas);

  const options = { encoding: "base64", compress: 0.4 };
  const base64 = await FileSystem.readAsStringAsync(imgSrc, options);
  const src = "data:image/jpeg;base64," + base64;
  image.src = src;
  image.addEventListener("load", () => {
    context.drawImage(image, 0, 0);
    context
      .getImageData(0, 0, canvas.width, canvas.height)
      .then((imageData) => {
        console.log(
          "Image data:",
          imageData,
          Object.values(imageData.data).length
        );
      })
      .catch((e) => {
        console.error("Error with fetching image data:", e);
      });
  });
};
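
Once the ImageData is in hand, the picked color is just an index into imageData.data. A small helper along those lines (my addition, not part of the original answer):

const pixelColor = (imageData, x, y, width) => {
  // imageData.data is a flat RGBA array: the pixel at (x, y) of an
  // image `width` pixels wide starts at index (y * width + x) * 4
  const data = Object.values(imageData.data);
  const i = (y * width + x) * 4;
  return { r: data[i], g: data[i + 1], b: data[i + 2], a: data[i + 3] };
};

With the 1x1 crop from step 4, pixelColor(imageData, 0, 0, 1) is the tapped color.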

Please let me know if there's a better way to do this :)

bearacuda13
  • Can you please further explain how to accomplish step 2? – Joseph Balnt Jul 23 '21 at 16:33
  • @JosephBalnt if you want this approach, you should start a new project, copy the code at the link, and tweak it to fit your needs. I got all these steps to work and it was tragically slow. I would actually recommend trying the TensorFlow React Native module and using its camera setup. They do a very similar process, but whatever the specific differences are, they make it a lot faster. There are some quirks with TensorFlow, but far fewer than with my method. The recommendation is the same: copy the example TensorFlow React Native file on their GitHub and bend it to your needs. Good luck! – bearacuda13 Jul 23 '21 at 16:38
  • Thank you for your comment, but could you please show me an example/link for implementing the TensorFlow.js React Native API? I'm still confused about how to approach this problem. – Joseph Balnt Jul 23 '21 at 18:27
  • You are missing the implementation of `setImgUri` in your answer. – xgmexgme Jun 22 '22 at 05:09