I'm using the ArSceneView ArFrame to get the camera image:
arFragment.getArSceneView().getArFrame().acquireCameraImage()
This returns an android.media.Image. I'm trying to convert this image to a:
com.google.api.services.vision.v1.model.Image
The only way I can see to do that is by converting the android.media.Image to a byte[] and then using the byte[] to create the Vision Image model. My problem is that I cannot figure out how to convert the android.media.Image.
If anyone ever runs into this issue, I found a solution.

Using the android.media.Image we can convert to a byte[] like this:

byte[] data = NV21toJPEG(
        YUV_420_888toNV21(image),
        image.getWidth(), image.getHeight());
private static byte[] YUV_420_888toNV21(Image image) {
    ByteBuffer yBuffer = image.getPlanes()[0].getBuffer();
    ByteBuffer uBuffer = image.getPlanes()[1].getBuffer();
    ByteBuffer vBuffer = image.getPlanes()[2].getBuffer();

    int ySize = yBuffer.remaining();
    int uSize = uBuffer.remaining();
    int vSize = vBuffer.remaining();

    byte[] nv21 = new byte[ySize + uSize + vSize];

    // NV21 expects the chroma planes interleaved as VU after the Y plane,
    // so U and V are swapped relative to YUV_420_888 plane order.
    // Note: this simple copy assumes tightly packed planes (no row or pixel padding).
    yBuffer.get(nv21, 0, ySize);
    vBuffer.get(nv21, ySize, vSize);
    uBuffer.get(nv21, ySize + vSize, uSize);

    return nv21;
}
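As a sanity check on the buffer arithmetic above: for a tightly packed 4:2:0 frame, the NV21 array holds one luma byte per pixel plus two quarter-resolution chroma planes, i.e. width * height * 3 / 2 bytes in total. A minimal sketch in plain Java (no Android dependencies; the frame dimensions are made up for illustration):

```java
public class Nv21SizeCheck {
    // Expected NV21 byte count for a tightly packed 4:2:0 frame:
    // a full-resolution Y plane plus interleaved V/U at quarter resolution each.
    static int nv21Size(int width, int height) {
        int ySize = width * height;             // one luma byte per pixel
        int uSize = (width / 2) * (height / 2); // quarter-resolution chroma
        int vSize = (width / 2) * (height / 2);
        return ySize + uSize + vSize;           // = width * height * 3 / 2
    }

    public static void main(String[] args) {
        // A common camera preview size, for illustration only.
        System.out.println(nv21Size(640, 480)); // prints 460800
    }
}
```

If the allocated nv21 array ever differs from this figure, the planes were padded and the naive bulk copy above would produce a corrupted image.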
private static byte[] NV21toJPEG(byte[] nv21, int width, int height) {
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    YuvImage yuv = new YuvImage(nv21, ImageFormat.NV21, width, height, null);
    // Quality 100 = maximum JPEG quality; lower it to shrink the payload.
    yuv.compressToJpeg(new Rect(0, 0, width, height), 100, out);
    return out.toByteArray();
}
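With the JPEG bytes in hand, they can be wrapped in the Vision model. A sketch, assuming the google-api-services-vision client is on the classpath; encodeContent Base64-encodes the payload the way the REST API expects:

```java
// Sketch: wrap the JPEG byte[] produced above for the Cloud Vision client.
com.google.api.services.vision.v1.model.Image visionImage =
        new com.google.api.services.vision.v1.model.Image();
visionImage.encodeContent(data); // Base64-encodes the JPEG bytes
```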