Is there any way to predict the size of a PNG file in memory?

  • Modern graphics cards don't have any of these restrictions anymore. They deal with non-power-of-two textures natively, and graphics APIs like OpenGL (which WebGL is based on) also support NPOT textures.

    I'm not actually sure to what extent this is really true when it comes to memory use. OpenGL specifies non-power-of-two support in terms of rendering behavior, and AFAIK it's perfectly spec-compliant for a GPU to actually allocate a power-of-two texture, paste in a non-power-of-two image, remember the image's size, and then act as if the texture were really that size (such as by calculating texture co-ordinates relative to the image size instead of the surface size). So I don't know that "NPOT support" always translates into "memory-efficient NPOT textures". On top of that, mobile GPUs tend to be simpler and more limited, so they do have NPOT limitations, and I think the square power-of-two limitation still applies on some hardware as well. Even if non-square power-of-two edges are supported, it's still hard to tell that an in-memory power-of-two surface isn't being used without knowing what the driver is doing.

    I don't know all the answers here, so Construct 2 errs on the side of caution and spritesheets onto square power-of-two surfaces to ensure that, no matter what the driver pretends to support, there is minimal wasted GPU memory. The first sketch at the end of this post shows what that padding means for memory estimates.

    WebGL 2 (based on OpenGL ES 3) in theory also gets full non-power-of-two texture support, but again it's not obvious that this means "memory efficient". It does mean we get mipmaps for any size of image, which is nice, but that's all I would count on (see the second sketch below).

    Compressed textures are a tricky area for a different reason: they usually compress far less efficiently than PNG and JPEG, which means the compressed format has to be encoded on the fly, either by JavaScript or by the browser. No browsers support this yet, and doing it in JavaScript means running into patent issues when encoding some non-free formats, is probably slow, and there's still no single format that works everywhere (the last sketch below probes what a given GPU exposes). They're also often lossy, which can garble nice 2D artwork (they're designed for 3D engines, where distance helps hide artefacts). So it's unlikely to be supported any time soon.
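    To make the padding point concrete: once a PNG is decoded, its compressed file size is irrelevant; memory use depends only on pixel dimensions. Below is a minimal sketch (TypeScript; the function names are hypothetical, and the exact rounding rule is an assumption for illustration) estimating both the raw decoded size and the worst case when an image is padded up to a square power-of-two surface as described above:

    ```typescript
    // Round n up to the next power of two (assumes n >= 1).
    function nextPowerOfTwo(n: number): number {
      let p = 1;
      while (p < n) p *= 2;
      return p;
    }

    // Raw decoded size: 4 bytes per pixel (RGBA, 8 bits per channel).
    function decodedBytes(width: number, height: number): number {
      return width * height * 4;
    }

    // Assumed worst case if the engine/driver pads the image up to a
    // square power-of-two surface.
    function paddedBytes(width: number, height: number): number {
      const side = nextPowerOfTwo(Math.max(width, height));
      return side * side * 4;
    }

    // Example: a 220x150 image decodes to 132,000 bytes (~129 KB), but may
    // occupy a 256x256 surface: 262,144 bytes (256 KB).
    console.log(decodedBytes(220, 150)); // 132000
    console.log(paddedBytes(220, 150));  // 262144
    ```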
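    On the WebGL 2 point: the one restriction that visibly disappears is the mipmap/wrapping limitation on NPOT textures. Here is a hedged sketch of how an uploader might branch on this (the WebGL calls are standard; `uploadTexture` and its parameters are made up for illustration):

    ```typescript
    function isPowerOfTwo(n: number): boolean {
      return n > 0 && (n & (n - 1)) === 0;
    }

    function uploadTexture(
      gl: WebGLRenderingContext | WebGL2RenderingContext,
      image: TexImageSource,
      width: number,
      height: number
    ): WebGLTexture {
      const tex = gl.createTexture()!;
      gl.bindTexture(gl.TEXTURE_2D, tex);
      gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, image);

      const npot = !isPowerOfTwo(width) || !isPowerOfTwo(height);
      const isWebGL2 = typeof WebGL2RenderingContext !== "undefined" &&
                       gl instanceof WebGL2RenderingContext;

      if (npot && !isWebGL2) {
        // WebGL 1 + NPOT: no mipmaps, and wrapping must clamp to edge.
        gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
        gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
        gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
      } else {
        // POT texture, or WebGL 2: mipmaps work for any size.
        gl.generateMipmap(gl.TEXTURE_2D);
        gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR_MIPMAP_LINEAR);
      }
      return tex;
    }
    ```

    Note that none of this reveals how the driver actually stores the texture; only the rendering behavior changes.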
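    And on compressed textures: the fragmentation is directly visible from JavaScript, since each format family is an optional WebGL extension. A small probe (the extension names are real; the helper function is hypothetical):

    ```typescript
    // Support varies by GPU and browser, which is why no single
    // compressed format works everywhere.
    const COMPRESSED_EXTENSIONS = [
      "WEBGL_compressed_texture_s3tc",  // DXT: common on desktop GPUs
      "WEBGL_compressed_texture_etc1",  // ETC1: many Android GPUs
      "WEBGL_compressed_texture_pvrtc", // PVRTC: PowerVR (iOS)
    ];

    function probeCompressedFormats(gl: WebGLRenderingContext): string[] {
      return COMPRESSED_EXTENSIONS.filter((name) => gl.getExtension(name) !== null);
    }

    const canvas = document.createElement("canvas");
    const gl = canvas.getContext("webgl");
    if (gl) {
      console.log("Compressed formats available:", probeCompressedFormats(gl));
    }
    ```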

  • I'm not actually sure to what extent this is really true, when it comes to memory use. […]

    I understand quite well that, for C2, it's better to rely on a known working approach that holds across all platforms/GPUs. It's a shame that GPU vendors don't publish their inner workings. I tried to find something about it, to no avail.

    However, the oldest graphics card I can speak for is the GTX 460, which is almost 5 years old. From tests I made, I know it supports non-square POTs natively; no memory is wasted. Assuming NVIDIA wouldn't change the driver's behaviour for the worse on newer cards, non-square POTs have been supported on all NVIDIA GPUs of at least the last 5 years. I then just assumed that AMD and Intel wouldn't want to fall behind. (A rough sketch of how such a test could look follows at the end of this post.)

    The rendering-support point is a strong argument. Indeed, you can't tell whether supporting them also means storing them memory-efficiently.
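    For what it's worth, a test like the one mentioned above could be approximated from WebGL without any vendor tooling, though only crudely. The sketch below is entirely hypothetical: drivers may defer allocation, virtualise memory, or lose the context rather than report an error, so treat any result as indicative at best. It counts how many textures of a given size fit before OUT_OF_MEMORY is reported; if non-square POTs are stored as-is, roughly twice as many 4096x2048 textures should fit as 4096x4096 ones:

    ```typescript
    function countAllocations(
      gl: WebGLRenderingContext,
      width: number,
      height: number,
      maxTextures = 64
    ): number {
      const textures: WebGLTexture[] = [];
      let count = 0;
      for (; count < maxTextures; count++) {
        const tex = gl.createTexture()!;
        gl.bindTexture(gl.TEXTURE_2D, tex);
        // Upload real pixel data to encourage the driver to commit storage.
        gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, width, height, 0,
                      gl.RGBA, gl.UNSIGNED_BYTE,
                      new Uint8Array(width * height * 4));
        if (gl.getError() === gl.OUT_OF_MEMORY) break;
        textures.push(tex);
      }
      textures.forEach((t) => gl.deleteTexture(t));
      return count;
    }

    // Compare: countAllocations(gl, 4096, 2048) vs countAllocations(gl, 4096, 4096)
    ```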

  • GPU details are available to registered developers, which unfortunately excludes hobbyists and most solo/indie programmers.

    Maybe check the DirectX SDK docs; they're usually full of general "good advice" and up-to-date "best practices" when it comes to managing graphics resources. Obviously they don't cover engine specifics (for example the padding Ashley already mentioned here), but they give some understanding of the inner workings.

    Though that's only true on PC... For mobile chips, good luck understanding anything at all; between Android and Apple, PowerVR and whatnot, they're all different, even within the same family.
