I see that you're iterating over the desktop bitmap's pixels to calculate luminosity. I wonder if it would be more efficient to downscale the image in graphics memory first, then analyze the tiny interpolated result instead of copying the whole bitmap to RAM every second.
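Just to illustrate the idea, here's a minimal sketch using OpenGL mipmaps as the GPU downscale step (the equivalent on Direct3D would be generating mips on the capture texture). It assumes a current GL 3.0+ context with a function loader already initialized, and that the desktop capture is already uploaded as an RGBA8 texture; `desktopTex` and `AverageLuminosity` are hypothetical names, not from your code:

```cpp
// Sketch: estimate average luminosity from a GPU-downscaled copy of the
// desktop texture. Assumes a current OpenGL 3.0+ context (loader such as
// GLEW/glad already initialized) and an RGBA8 capture texture.
// `desktopTex` is a hypothetical handle, not from the existing code.
#include <GL/glew.h>
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

double AverageLuminosity(GLuint desktopTex)
{
    glBindTexture(GL_TEXTURE_2D, desktopTex);

    // Let the GPU build the mip chain; each level is a filtered downscale
    // of the previous one, done entirely in graphics memory.
    glGenerateMipmap(GL_TEXTURE_2D);

    // Pick a coarse mip level (roughly 8 pixels on the long side) so the
    // readback to RAM is tiny.
    GLint w = 0, h = 0;
    glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_WIDTH, &w);
    glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_HEIGHT, &h);
    int level = std::max(0, static_cast<int>(
        std::floor(std::log2(std::max(w, h) / 8.0))));

    GLint lw = 0, lh = 0;
    glGetTexLevelParameteriv(GL_TEXTURE_2D, level, GL_TEXTURE_WIDTH, &lw);
    glGetTexLevelParameteriv(GL_TEXTURE_2D, level, GL_TEXTURE_HEIGHT, &lh);

    // Read back only the tiny interpolated image (a few hundred bytes
    // instead of the full-resolution bitmap).
    std::vector<unsigned char> px(static_cast<std::size_t>(4) * lw * lh);
    glGetTexImage(GL_TEXTURE_2D, level, GL_RGBA, GL_UNSIGNED_BYTE, px.data());

    // Rec. 709 luma, averaged over the small image, normalized to 0..1.
    double sum = 0.0;
    for (std::size_t i = 0; i < px.size(); i += 4)
        sum += 0.2126 * px[i] + 0.7152 * px[i + 1] + 0.0722 * px[i + 2];
    return sum / (static_cast<double>(lw) * lh) / 255.0;
}
```

For a 1920x1080 capture this ends up reading back something like a 15x8 image per tick instead of ~8 MB, so the per-second cost is mostly just the mip generation on the GPU.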