Thinking of returning to C2 - some questions

  • Not to go off the OP's topic too much

    It's been a while since there was a C2 game jam.

    Maybe a platformer jam would help figure out any issues.

    I love this guy.

  • > In native-based engines you have much more control over resolution, so we can render the effects and things on a smaller screen space, then use one clean up-scale render-to-texture to make that "HD" using a simple one-pass shader.

    OK, but that's more of a feature, not an inherent limitation of WebGL. We could do something like that, because we can do everything a native app can do with OpenGL ES 2 (soon 3). WebGL does not prevent that.

    > The problem here is that it's really more than just Construct 2 being so frame-dependent (despite use of dT like crazy)

    You can turn off framerate dependence if you want, or adjust the dt cap to limit frame skipping, by changing the minimum framerate (see the dt sketch at the end of this post). But if you disable it completely then you have to deal with the game running at different rates depending on the display refresh rate, e.g. it will go twice as fast on a 120Hz gaming monitor, which I've always thought was a worse result.

    > Support doesn't mean it's fully working. A laptop from 2006 might have a version of Chrome that "supports" WebGL, but it'll run DirectX 9 faster than it every time.

    Do not assume native DirectX or OpenGL are perfect for everyone either. I've done a lot of native coding with both DirectX 9 (Construct Classic) and OpenGL (C2 editor) and there is a total minefield of horrible driver bugs, crashes, and even poor performance. The situation is so bad it can even ruin major game releases. The GPU driver situation is totally awful and that affects the whole industry, not just WebGL, HTML5, or Construct 2. I know people get annoyed when I point the finger at other companies like those responsible for the terrible GPU drivers, but it really is that bad and everyone in this industry is dealing with it.

    > Unity has the funds and staff to do extreme optimization on both native and WebGL, and here's their results:
    >
    > https://blogs.unity3d.com/2014/10/07/be ... -in-webgl/

    Despite the title, that blog appears to mainly be testing CPU performance with Unity's WebGL exporter, which means it's basically running asm.js and WebAssembly vs. native code. Their own analysis says: "When you are mostly GPU-bound, you can expect WebGL to perform very similar to native code."

    > Even then, other games, both 2D and 3D, made in other engines, perform significantly better on Intel embedded GPUs than games made in C2. Whether that's due to usage of WebGL or not is debatable (I say probably not)

    Maybe you're actually testing CPU-bound games where the GPU performance doesn't matter. That explains why you could see differing performance even on a system with a super-powerful GPU. CPU performance is another topic entirely. My point is that since there is a 1:1 mapping between WebGL calls and OpenGL calls, the GPU performance should be identical to a native app making the same calls. There's no reason for it not to be. Maybe different engines do some special optimisations or something, but there's nothing stopping us doing that in WebGL as well, since the same features are there. So it's not actually WebGL's fault or some fundamental limitation of HTML5.

    If you have a CPU-bound game, as I always offer, send it to me and I'll profile it and see if the C2 engine can be improved.
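
    To make the dt cap concrete, here is a minimal sketch of framerate-independent movement with a clamped time step. This is illustrative only, not the actual C2 engine code; `player` and the constants are made up for the example.

    ```ts
    // Minimal sketch of dt-based movement with a minimum-framerate cap.
    // Illustrative only -- not the actual C2 engine code.
    const MIN_FRAMERATE = 30;          // below this, slow down instead of skipping
    const MAX_DT = 1 / MIN_FRAMERATE;  // cap the time step at ~33ms

    const player = { x: 0, speed: 200 }; // speed in pixels per second
    let lastTime = performance.now();

    function tick(now: number): void {
      let dt = (now - lastTime) / 1000; // seconds since the last frame
      lastTime = now;

      // Clamp dt: if a frame took longer than 1/30s, pretend it didn't.
      // Objects then move less per frame (the game slows down) instead of
      // jumping far enough in one step to tunnel through obstacles.
      dt = Math.min(dt, MAX_DT);

      player.x += player.speed * dt;

      requestAnimationFrame(tick);
    }

    requestAnimationFrame(tick);
    ```

    With dt in use, a 120Hz display just gets dt of about 8ms, well under the cap, and the game speed stays correct; it's dropping dt entirely (fixed per-frame movement) that makes the game run twice as fast there.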

  • > In native-based engines you have much more control over resolution, so we can render the effects and things on a smaller screen space, then use one clean up-scale render-to-texture to make that "HD" using a simple one-pass shader.

    >

    > OK, but that's more of a feature, not an inherent limitation of WebGL. We could do something like that, because we can do everything a native app can do with OpenGL ES 2 (soon 3). WebGL does not prevent that.

    Oooh okay then. Can we have it? As far as I can tell, resolution is the one big GPU bottleneck for integrated chips, and as others have pointed out, Intel integrated GPUs are fairly widespread outside the hardcore gamer segment.

  • > In native-based engines you have much more control over resolution, so we can render the effects and things on a smaller screen space, then use one clean up-scale render-to-texture to make that "HD" using a simple one-pass shader.

    >

    > We could do something like that, because we can do everything a native app can do with OpenGL ES 2 (soon 3).

    That would be incredibly helpful, especially if later on we get the ability to actively change the desktop resolution (a dying trend, but sometimes the only hope for the many gamers on older hardware on Steam).

    > You can turn off framerate dependence if you want, or adjust the dt cap to limit frame skipping, by changing the minimum framerate. But if you disable it completely then you have to deal with the game running at different rates depending on the display refresh rate, e.g. it will go twice as fast on a 120Hz gaming monitor, which I've always thought was a worse result.

    The gamers who leave the most negative reviews on my title on Steam are not the 120Hz monitor gamers. I do like the minimum framerate option that was finally added: slowing the game down while keeping collisions accurate is 100% more useful in a platformer. It lets us say "Hey, if it's slow, that's your computer!" rather than "The magic box we bought and built our game on decided you don't need to stand on that ground." We will of course still receive some angry reviews saying that a "retro game should run on a toaster", but that again comes down to GPU blacklisting plus CPUs that really can't run JavaScript.

    > Do not assume native DirectX or OpenGL are perfect for everyone either. I've done a lot of native coding with both DirectX 9 (Construct Classic) and OpenGL (C2 editor) and there is a total minefield of horrible driver bugs, crashes, and even poor performance.

    I agree native can be the root of the problem in many situations, but this current "SEGA tower of power" kind of workflow (C2 engine > HTML5 > Node-Webkit/NW.js > Chromium > OS > graphics driver > GPU) could really benefit from something more dedicated to games replacing the NW.js and Chromium steps, even if it doesn't come from Scirra specifically. I guess we'll have to wait and see how the tech goes there.

    > Maybe you're actually testing CPU-bound games where the GPU performance doesn't matter. That explains why you could see differing performance even on a system with a super-powerful GPU. CPU performance is another topic entirely. My point is that since there is a 1:1 mapping between WebGL calls and OpenGL calls, the GPU performance should be identical to a native app making the same calls. There's no reason for it not to be. Maybe different engines do some special optimisations or something, but there's nothing stopping us doing that in WebGL as well, since the same features are there. So it's not actually WebGL's fault or some fundamental limitation of HTML5.

    Agreed, there are likely many situations (aside from GPU blacklisting) where a CPU bottleneck is the entire issue. However, I would argue that could still be seen as a limitation of HTML5, simply due to the overhead and memory-management issues inherent in JavaScript (see the pooling sketch at the end of this post for one common mitigation). If the optimizations you've made to C2 (which are pretty amazing, considering again that it's JavaScript) had been applied to CC, the latest benchmarks against "native Construct Classic" would look quite different. Even 10% less CPU performance could be the difference between customers on low-end Steam hardware complaining of falling through floors, or merely seeing a few lag spikes at intense moments.

    That said, I can only imagine something like *maybe* asm.js coming in to save the day there, as you've already done many optimizations to C2 over the past couple of years.
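
    To make the memory-management point concrete, here is a minimal sketch of object pooling, a standard way to cut GC pauses in JavaScript games by reusing objects instead of allocating new ones every frame. Purely illustrative; C2's internal recycling (if any) may work differently.

    ```ts
    // Minimal object-pool sketch: reuse objects instead of allocating per
    // frame, so the garbage collector has less work (and fewer pauses).
    // Purely illustrative -- not C2's actual engine code.
    class Pool<T> {
      private free: T[] = [];
      constructor(private create: () => T) {}

      acquire(): T {
        // Reuse a freed object if available, otherwise allocate a new one.
        return this.free.pop() ?? this.create();
      }

      release(obj: T): void {
        this.free.push(obj); // caller must not touch obj after release
      }
    }

    // Usage: particles get recycled rather than re-allocated each frame.
    interface Particle { x: number; y: number; life: number; }
    const particlePool = new Pool<Particle>(() => ({ x: 0, y: 0, life: 0 }));

    const p = particlePool.acquire();
    p.x = 100; p.y = 50; p.life = 1.5;
    // ...later, when the particle expires:
    particlePool.release(p);
    ```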

  • > in the form of a side-scrolling 2D platformer

    >

    2D platformers seem to be the weakest point of C2, especially in a game where the enemies also use platforming behaviors and many (e.g. 5+) enemies are alive at a time/on screen. That eats up JavaScript performance and leads to missed collisions on average or lesser machines (which feel like a large portion of the audience buying 2D games on desktop/Steam); see the sub-stepping sketch at the end of this post for the usual mitigation.

    Also, screen capture software still tends to wreak havoc on these games, causing further missed collisions and engine issues you won't easily avoid, so the game's social spread might be negatively impacted.

    But as a side project? I agree with glerikud that it should work alright on desktop aside from the above (Windows specifically, never had much luck with Mac/Linux for our game).

    I've always wondered why I've run into that issue when making my games. I'll have to remember that in the future.
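
    For what it's worth, the usual fix for fast objects tunnelling through thin floors (whether or not C2's Platform behavior does exactly this internally) is to sub-step movement when dt gets large. A rough sketch, with a hypothetical `isSolidAt` collision query:

    ```ts
    // Rough sketch of sub-stepped movement to avoid tunnelling through
    // thin platforms on slow machines. Hypothetical helper names; this is
    // the general technique, not C2's actual behavior code.
    const MAX_STEP = 4; // max pixels moved per sub-step

    function moveWithSubSteps(obj: { x: number; y: number; vy: number },
                              dt: number,
                              isSolidAt: (x: number, y: number) => boolean): void {
      const totalDy = obj.vy * dt;
      const steps = Math.max(1, Math.ceil(Math.abs(totalDy) / MAX_STEP));
      const stepDy = totalDy / steps;

      for (let i = 0; i < steps; i++) {
        obj.y += stepDy;
        if (isSolidAt(obj.x, obj.y)) {
          // Landed: step back out of the solid and stop falling.
          obj.y -= stepDy;
          obj.vy = 0;
          break;
        }
      }
    }
    ```

    The trade-off is more collision checks per tick on slow frames, which is exactly when the CPU is already struggling, so engines cap the step count and fall back to slowing the game down, as the minimum framerate option does.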

  • >

    > > In native-based engines you have much more control over resolution, so we can render the effects and things on a smaller screen space, then use one clean up-scale render-to-texture to make that "HD" using a simple one-pass shader.

    > >

    > We could do something like that, because we can do everything a native app can do with OpenGL ES 2 (soon 3).

    >

    > That would be incredibly helpful, especially if later on we get the ability to actively change the desktop resolution (a dying trend, but sometimes the only hope for the many gamers on older hardware on Steam).

    Are you aware that the existing "low quality fullscreen mode" is effectively identical to switching resolution on a modern display? The only difference is that the last-step stretch to physical resolution is done by the GPU rather than a chip in the display, and the performance impact of that on the GPU ought to be minimal. So you can already get low-res rendering for improved performance by setting a small viewport size and using low-quality fullscreen mode (a sketch of the equivalent WebGL technique is at the end of this post).

    Beyond that, it's possible to add a setting that turns down the quality of individual shader effects, which can be useful. However I suspect it would only help with certain intensive shaders like blurs, and it has the (obvious) trade-off of reducing visual quality. Something like screen blend is typically very fast (no extra fill rate on top of normal rendering; it's 1:1 with normal rendering but with a different calculation), and as you turn down the quality it just gets noticeably blurry/pixelated. On the other hand, a low-quality blur can still look good and save performance. So, which effects are you using heavily, and are you sure they would be improved by such a quality setting?
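
    For reference, the render-at-low-resolution-then-upscale technique being discussed maps directly onto standard WebGL calls. A condensed sketch, assuming `gl` is a WebGLRenderingContext and `drawScene` / `drawFullscreenQuad` are your own (hypothetical) rendering functions:

    ```ts
    // Condensed sketch of low-res render-to-texture plus a one-pass
    // upscale in WebGL. Assumes gl, drawScene and drawFullscreenQuad
    // already exist elsewhere in your engine.
    const LOW_W = 480, LOW_H = 270; // render at a quarter of 1080p

    // Create a texture to render the scene into.
    const tex = gl.createTexture();
    gl.bindTexture(gl.TEXTURE_2D, tex);
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, LOW_W, LOW_H, 0,
                  gl.RGBA, gl.UNSIGNED_BYTE, null);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.LINEAR);
    // Non-power-of-two textures in WebGL 1 must clamp to edge.
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);

    // Attach the texture to a framebuffer.
    const fbo = gl.createFramebuffer();
    gl.bindFramebuffer(gl.FRAMEBUFFER, fbo);
    gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0,
                            gl.TEXTURE_2D, tex, 0);

    function render(): void {
      // Pass 1: draw the scene (and expensive effects) at low resolution.
      gl.bindFramebuffer(gl.FRAMEBUFFER, fbo);
      gl.viewport(0, 0, LOW_W, LOW_H);
      drawScene();

      // Pass 2: one cheap upscale draw to the full-size canvas.
      gl.bindFramebuffer(gl.FRAMEBUFFER, null);
      gl.viewport(0, 0, gl.drawingBufferWidth, gl.drawingBufferHeight);
      drawFullscreenQuad(tex);
    }
    ```

    This is essentially what low-quality fullscreen mode already does for you; the sketch just shows there is no WebGL obstacle to exposing finer control over it.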

  • Ashley: Ah yes, our game did take advantage of that mode, which helped a bit.

    Not sure which specific effects now, as we've retired the game and moved its source off our storage system into archives, but aside from upscaling the GPU was doing very little beyond sprites, with possibly a few blur or glow shader effects here and there.

    The low-res graphics and "native" resolution of the game (somewhere around original NES resolution) meant the GPU should not have been a big issue (aside from blacklisted or underpowered GPUs on integrated chipsets).

  • Blurs and glows are by far the most GPU-intensive effects in the C2 engine, so it sounds like an effect-specific quality setting could help a lot there (a rough sketch of the idea is below). I'd much appreciate a demo .capx (even something simple created from memory) to see what kind of thing you were working with, so I can understand what the engine ought to do to optimise for that.
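
    To illustrate what such a setting might look like (purely illustrative, not C2's actual effect code), here is a horizontal-blur fragment shader whose GPU cost scales with its sample count:

    ```ts
    // Illustrative GLSL (ES 1.00) fragment shader for a horizontal blur
    // whose cost scales with a quality constant. Not C2's actual code.
    const blurFrag = `
    precision mediump float;
    uniform sampler2D uTex;
    uniform vec2 uTexelSize;   // 1.0 / texture resolution
    varying vec2 vUv;

    // Fewer taps = less GPU work, but a rougher-looking blur.
    const int QUALITY = 4;     // e.g. 8 = high, 4 = medium, 2 = low

    void main() {
      vec4 sum = vec4(0.0);
      for (int i = -QUALITY; i <= QUALITY; i++) {
        sum += texture2D(uTex, vUv + vec2(float(i), 0.0) * uTexelSize);
      }
      gl_FragColor = sum / float(QUALITY * 2 + 1);
    }
    `;
    ```

    Since GLSL ES 1.00 requires constant loop bounds, in practice you'd compile one shader variant per quality level; combined with running the blur at reduced resolution, the savings multiply.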
