This is related to OpenGL ES 2.0: glReadPixels() with float or half_float textures.
I want to read out the float values from a framebuffer object after rendering.
On iOS, the following
GLint ext_type;
glGetIntegerv(GL_IMPLEMENTATION_COLOR_READ_TYPE, &ext_type);
really just tells us that glReadPixels() only allows GL_UNSIGNED_BYTE data to be read out.
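To see exactly what the implementation will accept, both the format and the type can be queried and passed straight to glReadPixels(). A minimal sketch, assuming a complete framebuffer of size w x h is bound (w and h are hypothetical names):

GLint readFormat, readType;
glGetIntegerv(GL_IMPLEMENTATION_COLOR_READ_FORMAT, &readFormat);
glGetIntegerv(GL_IMPLEMENTATION_COLOR_READ_TYPE, &readType);

/* On iOS this pair comes back as GL_RGBA / GL_UNSIGNED_BYTE, so the
   buffer size below assumes 4 bytes per pixel and the readback still
   quantizes a float render target. */
GLubyte *pixels = (GLubyte *)malloc((size_t)w * h * 4);
glReadPixels(0, 0, w, h, (GLenum)readFormat, (GLenum)readType, pixels);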
Is there a way to use the texture cache technique described in this article to get around this?
The back story is that I am trying to implement a general matrix multiplication routine for arbitrarily sized matrices (e.g., 100,000 x 100,000) using an OpenGL ES 2.0 fragment shader (similar to Dominik Göddeke's trusty ol' tutorial example). glReadPixels() is not being particularly cooperative here because it converts the framebuffer floats to GL_UNSIGNED_BYTE, causing a loss of precision.
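One workaround that stays within GL_UNSIGNED_BYTE readbacks (a sketch of the classic RGBA-packing trick, not something from the linked article) is to have the fragment shader pack each float into the four channels of an RGBA8 pixel and reassemble the value after glReadPixels(). Note this only encodes values in [0, 1), so sign and magnitude have to be mapped into that range separately; v_result below is a hypothetical varying carrying the computed matrix element:

/* Fragment shader: pack a float in [0, 1) into the RGBA8 render target.
   highp float is not guaranteed in ES 2.0 fragment shaders, which limits
   the usable precision on some devices. */
static const char *packingFragmentShader =
    "precision highp float;\n"
    "varying float v_result;\n"
    "void main() {\n"
    "    vec4 enc = fract(v_result * vec4(1.0, 255.0, 65025.0, 16581375.0));\n"
    "    enc -= enc.yzww * vec4(1.0/255.0, 1.0/255.0, 1.0/255.0, 0.0);\n"
    "    gl_FragColor = enc;\n"
    "}\n";

/* CPU side: invert the packing on the bytes returned by glReadPixels(). */
static float unpack_float(const unsigned char rgba[4])
{
    return rgba[0] / 255.0f
         + rgba[1] / 65025.0f
         + rgba[2] / 16581375.0f
         + rgba[3] / 4228250625.0f;
}

The cost is one RGBA pixel (and the packing arithmetic) per matrix element, but the full byte pattern survives the GL_UNSIGNED_BYTE conversion, so no precision is lost in the readback itself.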
I asked a similar question and I think the answer is NO, if only because texture caches (as an API) use CoreVideo pixel buffers, and those don't currently support float formats.