Android camera pixel manipulation

I am trying to build a camera app that takes in the camera preview, manipulates the pixels, and then displays the new image. I need the manipulation to happen in real time.

From what I have read online, and from questions here, you need to create a custom SurfaceView and manipulate the pixel array from the onPreviewFrame method. I have built a custom SurfaceView, have that callback running, and have converted the YUV data to RGB.

Now, my question is, how do I display this new pixel array on the screen in real time? Do I somehow return it in the onPreviewFrame method? Do I have to change the byte[] array? Do I take my new pixel array and display it using a Bitmap? Is there a way to get the byte[] array from the camera preview without even displaying the preview?

If someone could answer these questions with code examples, that would be great! I am kind of new to Android, so I need the answers explained well enough for me to understand. Here is part of the code I have that runs the camera preview:

public void surfaceChanged(SurfaceHolder holder, int format, int w, int h) {
        // If your preview can change or rotate, take care of those events here.
        // Make sure to stop the preview before resizing or reformatting it.

        if (mHolder.getSurface() == null){
          // preview surface does not exist
          return;
        }

        // stop preview before making changes
        try {
            mCamera.stopPreview();
        } catch (Exception e){
          // ignore: tried to stop a non-existent preview
        }

        // start preview with new settings
        try {
            //parameters.setPreviewSize(w, h);
            mCamera.setParameters(parameters);
            mCamera.setPreviewDisplay(mHolder);
            mCamera.setPreviewCallback(new PreviewCallback() {
                public void onPreviewFrame(byte[] data, Camera camera)
                {
                    System.out.println("onPreviewFrame");
                    // transform NV21 pixel data into RGB pixels
                    decodeYUV420SP(pixels, data, previewSize.width, previewSize.height);

                    // Output the value of the top-left pixel in the preview to LogCat
                    Log.i("Pixels", "The top left pixel has the following RGB (hexadecimal) value: "
                            + Integer.toHexString(pixels[0]));
                }
            });
            mCamera.startPreview();

        } catch (Exception e){
            Log.d("CameraPreview", "Error starting camera preview: " + e.getMessage());
        }
    }
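(For reference, decodeYUV420SP is not shown above; it is essentially the widely circulated integer-math NV21-to-ARGB routine, sketched below. My actual copy may differ slightly.)

    // Converts an NV21 (YCrCb) preview frame into ARGB_8888 pixels.
    // This is the integer-math version commonly posted for Android camera previews.
    public static void decodeYUV420SP(int[] rgb, byte[] yuv420sp, int width, int height) {
        final int frameSize = width * height;

        for (int j = 0, yp = 0; j < height; j++) {
            int uvp = frameSize + (j >> 1) * width, u = 0, v = 0;
            for (int i = 0; i < width; i++, yp++) {
                int y = (0xff & ((int) yuv420sp[yp])) - 16;
                if (y < 0) y = 0;
                if ((i & 1) == 0) {
                    v = (0xff & yuv420sp[uvp++]) - 128;
                    u = (0xff & yuv420sp[uvp++]) - 128;
                }

                int y1192 = 1192 * y;
                int r = (y1192 + 1634 * v);
                int g = (y1192 - 833 * v - 400 * u);
                int b = (y1192 + 2066 * u);

                // clamp to the fixed-point range before packing
                if (r < 0) r = 0; else if (r > 262143) r = 262143;
                if (g < 0) g = 0; else if (g > 262143) g = 262143;
                if (b < 0) b = 0; else if (b > 262143) b = 262143;

                rgb[yp] = 0xff000000 | ((r << 6) & 0xff0000)
                        | ((g >> 2) & 0xff00) | ((b >> 10) & 0xff);
            }
        }
    }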

This gives me the RGB pixel array I want to display instead of the preview. How do I do this?

MikeShiny asked Jan 16 '26 at 20:01


1 Answer

You cannot manipulate the surface that you connected to the camera preview. The byte array you receive in onPreviewFrame() is just a copy of what the framework displays on the screen. Moreover, you will find that the two streams are asynchronous: you can slow down the callbacks (for example, by adding a sleep() to your callback), but the preview surface will still be updated regardless.

You can hide the preview SurfaceView by placing other views on top of it, or you can get rid of this view altogether by using setPreviewTexture() instead of setPreviewDisplay() (note: added in API level 11). Hiding the surface is not as easy as it may seem: the framework may pop it back to the top, and it requires careful synchronization of camera start/restart with layout.
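For example, here is a minimal sketch of the setPreviewTexture() route, where the preview goes to a SurfaceTexture that is never drawn (the texture name 10 and the variable names are only illustrative, and this assumes API level 11+):

    // Route the preview to an off-screen SurfaceTexture so no visible surface is needed.
    SurfaceTexture dummyTexture = new SurfaceTexture(10); // the texture name is arbitrary here
    try {
        mCamera.setPreviewTexture(dummyTexture);   // instead of setPreviewDisplay(mHolder)
    } catch (IOException e) {
        Log.e("CameraPreview", "Could not set preview texture", e);
    }
    mCamera.setPreviewCallback(previewCallback);   // your onPreviewFrame() still fires
    mCamera.startPreview();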

Anyway, once the surface is hidden, you can use the byte array received in onPreviewFrame() to generate an image and display it, manipulating the pixels to your liking along the way. I believe the optimal technique is to send the pixel data to OpenGL: you can use a shader to offload the YCrCb (NV21) to RGB conversion to the GPU.
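If you want something simpler than OpenGL to start with, a CPU-only sketch is to wrap the converted pixels in a Bitmap and push it into an ImageView (pixels, previewSize, and mImageView are assumed to exist in your class; this will not be as fast as the GPU route):

    // Inside onPreviewFrame(), after decodeYUV420SP() has filled 'pixels':
    final Bitmap frame = Bitmap.createBitmap(pixels,
            previewSize.width, previewSize.height, Bitmap.Config.ARGB_8888);

    // Views may only be updated on the UI thread.
    mImageView.post(new Runnable() {
        public void run() {
            mImageView.setImageBitmap(frame);
        }
    });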

Alex Cohn answered Jan 19 '26 at 18:01