
I am trying to screencast my Android device's screen to a web browser using the MediaProjection API and WebRTC.

The MediaProjection API renders its output to a Surface and returns a VirtualDisplay; I have gotten this far. I looked at the WebRTC library for Android, but it is built to receive input only from the device camera. I am trying to read and modify the WebRTC code so that it streams whatever is shown on the Surface.

My question is: how can I regularly receive byte[] data from a Surface, the way Camera.PreviewCallback delivers preview frames? What other options do I have?

  • If the frames are small, you can pass them whole. If they're infrequent, you can convert each one to JPEG or PNG. If you want something approaching the display refresh rate, you'll need to use the video encoder to convert them to a video stream. Your question is a bit vague for stackoverflow -- this site is intended for specific programming questions. – fadden Jul 06 '15 at 21:28
  • @fadden Could you have a look at this question: http://stackoverflow.com/questions/31183009/use-surfacetexture-to-render-video-stream-to-android-ordinary-view-above-api-lev I just need you to say Yes or No. – dragonfly Jul 07 '15 at 05:14
  • @fadden I have updated the question. Please provide your comment. If I am thinking in the wrong direction, please guide me to the right one. – shubendrak Jul 10 '15 at 06:47
  • In theory you can use an ImageReader Surface (https://developer.android.com/reference/android/media/ImageReader.html), but I don't know if it'll work with RGB888 surfaces (it's meant for Camera YUV output). You might need to receive frames on a SurfaceTexture surface, render them to a pbuffer, and extract them with `glReadPixels()`. Do you need the `byte[]`? Depending on what it is you're trying to do there might be a way to let the GPU do it. – fadden Jul 10 '15 at 15:49
  • @fadden I have tried ImageReader; I got some output, but the pixels are zig-zagged. I want to try the second part of your answer. Actually, there are some C++ methods in androidRTC: the byte[] from the camera is passed into those C++ methods, and androidRTC takes care of streaming it to a network device. I cannot modify the C++ code as I don't know much about what's going on inside, so currently I am trying to pass the byte[] from the Surface to those C++ methods. It would be helpful if you could point me to a few resources that would help me learn how to do all this. – shubendrak Jul 10 '15 at 16:10
  • Zig-zag pixels are usually the result of ignoring the stride -- make sure you're using `getPixelStride()` and `getRowStride()` (see the stride-handling sketch after these comments). ImageReader is the fastest and easiest way to access Surface content with the CPU. The alternative is to do something like http://bigflake.com/mediacodec/#ExtractMpegFramesTest, which sends video to a SurfaceTexture, renders it, and extracts the pixels with `glReadPixels()`. – fadden Jul 10 '15 at 17:56
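
To illustrate fadden's point about stride: a minimal sketch of copying an RGBA_8888 Image into a tightly packed byte[]. The image, width, and height variables are assumed to come from the surrounding capture code.

// Stride-aware copy of an RGBA_8888 Image into a packed byte[].
// Assumes 'image' was acquired from an ImageReader created with PixelFormat.RGBA_8888.
Image.Plane plane = image.getPlanes()[0];
ByteBuffer buffer = plane.getBuffer();
int pixelStride = plane.getPixelStride();  // bytes per pixel (4 for RGBA_8888)
int rowStride = plane.getRowStride();      // bytes per row; may exceed width * pixelStride

byte[] packed = new byte[width * height * pixelStride];
for (int row = 0; row < height; row++) {
    buffer.position(row * rowStride);  // jump past any row padding
    buffer.get(packed, row * width * pixelStride, width * pixelStride);
}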

1 Answer


Here is how I solved my problem. I used the ImageReader class like this:

// Two buffers: one being filled while the previous one is processed.
imageReader = ImageReader.newInstance(displayWidth, displayHeight, PixelFormat.RGBA_8888, 2);
mediaProjection.createVirtualDisplay("screencapture",
        displayWidth, displayHeight, density,
        flags, imageReader.getSurface(), null, handler);
imageReader.setOnImageAvailableListener(new ImageAvailableListener(), null);

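The mediaProjection used above must first be obtained from the MediaProjectionManager. As a rough sketch, assuming this runs in an Activity (REQUEST_CODE and the field names are illustrative):

// Ask the user for screen-capture permission; the projection token
// arrives in onActivityResult().
MediaProjectionManager projectionManager =
        (MediaProjectionManager) getSystemService(Context.MEDIA_PROJECTION_SERVICE);
startActivityForResult(projectionManager.createScreenCaptureIntent(), REQUEST_CODE);

@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    if (requestCode == REQUEST_CODE && resultCode == Activity.RESULT_OK) {
        mediaProjection = projectionManager.getMediaProjection(resultCode, data);
        // ...now create the ImageReader and virtual display as shown above.
    }
}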

private class ImageAvailableListener implements ImageReader.OnImageAvailableListener {
    @Override
    public void onImageAvailable(ImageReader reader) {
        Image image = null;
        Bitmap bitmap = null;
        ByteArrayOutputStream stream = null;

        try {
            image = imageReader.acquireLatestImage();
            if (image != null) {
                Image.Plane[] planes = image.getPlanes();
                ByteBuffer buffer = planes[0].getBuffer();
                int pixelStride = planes[0].getPixelStride();
                int rowStride = planes[0].getRowStride();
                // The row stride may be wider than the display; account for the
                // padding or the pixels come out zig-zagged.
                int rowPadding = rowStride - pixelStride * displayWidth;

                // Create a bitmap wide enough to include the row padding.
                bitmap = Bitmap.createBitmap(displayWidth + rowPadding / pixelStride,
                        displayHeight, Bitmap.Config.ARGB_8888);
                bitmap.copyPixelsFromBuffer(buffer);

                // Compress the frame to JPEG and wrap it in a data URL.
                stream = new ByteArrayOutputStream();
                bitmap.compress(Bitmap.CompressFormat.JPEG, 50, stream);
                StringBuilder sb = new StringBuilder();
                sb.append("data:image/jpeg;base64,");
                sb.append(StringUtils.newStringUtf8(Base64.encode(stream.toByteArray(), Base64.DEFAULT)));
                WebrtcClient.sendProjection(sb.toString());
            }
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            // Always release the Image, or the ImageReader stops delivering frames.
            if (image != null) {
                image.close();
            }
            if (bitmap != null) {
                bitmap.recycle();
            }
            if (stream != null) {
                try {
                    stream.close();
                } catch (IOException ignored) {
                }
            }
        }
    }
}

I am converting the byte[] to a Base64 string and sending it through the WebRTC DataChannel.
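
For reference, WebrtcClient.sendProjection is my own helper, not part of the WebRTC SDK. A hypothetical sketch of it, assuming an already-negotiated org.webrtc.DataChannel stored in a dataChannel field:

// Hypothetical helper: push the data URL as a text message over the data channel.
public static void sendProjection(String dataUrl) {
    ByteBuffer payload = ByteBuffer.wrap(dataUrl.getBytes(Charset.forName("UTF-8")));
    dataChannel.send(new DataChannel.Buffer(payload, false));  // false = text, not binary
}

Note that SCTP data channels have per-message size limits, so large frames may need to be split into chunks on the sender and reassembled in the browser.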

  • How was the performance of the ImageReader in this case, in terms of FPS? And why did you choose to send the stream through the datachannel and not the regular video channel? – Michael P Aug 18 '15 at 17:44
  • @MichaelP The performance of ImageReader is quite good. If you don't wish to deal with SurfaceTexture, then ImageReader is the fastest way to get a byte[] from a Surface. I was using the webrtc-android SDK for this project. They have hardcoded the video channel to work with the device camera. I tried changing this behaviour by editing their source code and had very little success. The above solution worked perfectly for me. – shubendrak Aug 19 '15 at 04:23
  • @shubendrak, thanks for your reply. I'm actually trying to do the same thing, and I spent all day today trying to figure out how to modify their code to do that. This is my new question: http://stackoverflow.com/questions/32084067/android-webrtc-custom-capturer They really tightly coupled the Java wrapper with the C++ implementation. You mentioned SurfaceTexture; I don't think a SurfaceTexture can give you an input Surface (or take a Surface), so it seems like the only solution is to use an ImageReader. I just wanted to make sure it's not as slow as calling glReadPixels, and that it can do 30 FPS. – Michael P Aug 19 '15 at 04:31
  • @shubendrak: I am exactly in the same boat but not using data channels. Can you please share the code? – user2801184 Nov 16 '16 at 20:36
  • Can you please help me with the libraries used to capture the screen? And info about the WebRTC library which receives the Base64 stream. – user2801184 Nov 16 '16 at 20:46