I notice that C2 builds start chugging as desktop resolutions go up, especially if you throw a couple of shader effects in. I have no clue how C2 actually handles scaling, but if I had to guess I'd say a separate scaling operation is done on each object in view. Is that correct? Also, seeing how massive a hit the GPU takes from rendering an upscaled view with shaders, I'd guess shader effects get applied after scaling is done too.
If I'm right, wouldn't it be faster to render everything at 1:1 (or 2:2) onto an offscreen buffer of some kind, then just upscale the buffer to the target resolution in a single scaling operation?
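To put some rough numbers on it, here's a back-of-the-envelope sketch (Python, with made-up figures for resolution and pass counts — I have no idea what C2 actually does internally) comparing the per-pixel shader cost of the two approaches:

```python
# Hypothetical comparison, NOT C2's actual pipeline: shader cost when effects
# run at full desktop resolution vs. on a low-res buffer upscaled once at the end.

def shaded_pixels_per_frame(width, height, effect_passes):
    """Total pixels touched by full-screen shader passes at a given resolution."""
    return width * height * effect_passes

# Assumed numbers: 1600x1200 desktop, game authored at 800x600, 3 shader passes.
full_res = shaded_pixels_per_frame(1600, 1200, 3)  # shade after upscaling
low_res = shaded_pixels_per_frame(800, 600, 3)     # shade at native size...
upscale = 1600 * 1200                              # ...then one final stretch-blit

print(full_res)           # 5760000 shaded pixels per frame
print(low_res + upscale)  # 1440000 + 1920000 = 3360000
```

Even charging the final upscale at one operation per output pixel, the buffer approach does well under half the work in this example, and the gap grows with more shader passes.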
Lots of people have desktop resolutions of 1600x1200 or higher, and they won't be impressed if my humble 2D games get outperformed by something like Skyrim on their systems :P
Just a thought — I may be completely wrong about how scaling is done. But if C2 builds have to run at desktop resolutions, I think some kind of optimization is needed here.