It looks like you're transforming the light by the transpose of the modelviewprojection matrix instead of by the inverse transpose of the modelview matrix. That's probably your issue. Also, in case you didn't notice, you're setting your camera to (0, 0, 0) instead of passing it in as a uniform.

Also, view space normals are one of the main benefits of deferred rendering, since lighting goes from an O(M * N) operation to an O(M + N) one, where M is the number of rendered objects and N is the number of lights. Normal mapping, among other things, also becomes much easier to do.

You can store depth in the alpha channel of another buffer if you're not already doing that, but there's no way to infer anything about normals from depth, or vice versa.

Thanks for replying, this was actually really helpful.

Whoops, completely forgot I was hard-coding the camera position. Also, it turns out both my normals and my positions from depth were completely wrong.

I've fixed those and everything's much better now, but I can't get the light into view space properly. Multiplying it by the inverse transpose of the modelview matrix makes the light follow the direction of the camera.

I've also tried sending the shader a uniform containing the modelview matrix after the camera transformations are applied, and multiplying the light's position by the inverse transpose of that. That yields better results: the position stays the same, except that the intensity of the light on different surfaces varies greatly as the camera rotates.

For example...

Panned slightly to the left:

From the first to the second picture, the position of the camera does not change, only its rotation.

It's fairly easy to see on the cube as well. If you angle it right some of its faces can go completely into darkness.

Same camera position, panned left:

All I can think is that I'm rotating the light wrong, as everything else seems to work fine now.

GLSL Shader:

Code:

#version 120
#extension GL_ARB_gpu_shader5 : enable

uniform sampler2D tDiffuse;
uniform sampler2D tNormals;
uniform sampler2D tDepth;
uniform mat4 WorldMatrix;
uniform vec3 LightPos;
uniform vec2 bufferSize;
uniform vec2 camPlanes; // x = near plane, y = far plane

void main( void )
{
    vec4 color  = texture2D( tDiffuse, gl_TexCoord[0].xy );
    vec4 normal = texture2D( tNormals, gl_TexCoord[0].xy );
    vec4 depth  = texture2D( tDepth,   gl_TexCoord[0].xy );

    // Reconstruct the view-space position from the linearized depth.
    vec3 position = vec3(
        ((gl_FragCoord.x / bufferSize.x) - 0.5) * 2.0,
        ((-gl_FragCoord.y / bufferSize.y) + 0.5) * 2.0 / (bufferSize.x / bufferSize.y),
        camPlanes.x / (camPlanes.y - depth.x * (camPlanes.y - camPlanes.x)) * camPlanes.y );
    position.x *= position.z;
    position.y *= -position.z;

    vec3 lightDir = position - (vec4(LightPos, 0) * inverse(transpose(WorldMatrix))).xyz;
    //vec3 lightDir = position - (vec4(LightPos, 0) * inverse(transpose(gl_ModelViewMatrix))).xyz;

    // Unpack the normal from [0, 1] and reconstruct its z component.
    normal.xy = normal.xy * 2.0 - 1.0;
    normal.z = sqrt( 1.0 - dot( normal.xy, normal.xy ) );
    vec3 norm = vec3( normal.xy, normal.z );

    float attenuation = clamp( dot( norm, normalize( vec3( -lightDir.x, -lightDir.y, lightDir.z ) ) ), 0.0, 1.0 );
    gl_FragColor = color * attenuation;
}

Creating the WorldMatrix uniform in C++:

Code:

glPushMatrix();
glRotatef(xrot,1.0,0.0,0.0);
glRotatef(yrot,0.0,1.0,0.0);
glTranslated(-xpos,-ypos,-zpos);
float worldMatrix[16];
glGetFloatv(GL_MODELVIEW_MATRIX, worldMatrix);
glPopMatrix();
glUniformMatrix4fvARB ( glGetUniformLocationARB(deflight.id(),"WorldMatrix"), 1, false, worldMatrix);

I'm really not sure what to do; this shouldn't be as hard as I seem to be making it. Any ideas? I'd appreciate any help I can get, as this is driving me insane.