Reduce Lag, higher FPS

  • So the project I'm working on is a 2D Minecraft clone. I have a random terrain generator, but it lags a lot and I only get 1-5 FPS. Can anyone tell me how to raise the FPS and reduce the lag? Thank you!

  • To avoid performance issues from lots of tiles, I paste the tiles into a canvas, then load the image into a large empty sprite, then destroy all the tiles and canvases. Not sure you'd be able to do that for a Minecraft clone, though, since tiles are destructible. Might look into chunks. Disabling off-screen solids might help too.
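    The chunk idea above can be sketched in plain JavaScript. This is illustrative only, not Construct's API: the world is split into fixed-size chunks, and only chunks near the viewport are kept active (the constants and names are assumptions).

    ```javascript
    // Minimal chunk-manager sketch (illustrative, not Construct's API).
    const CHUNK_SIZE = 16;   // tiles per chunk side (assumed)
    const TILE_PX = 32;      // pixels per tile (assumed)

    // Map a world-pixel coordinate to a chunk coordinate.
    function chunkCoord(px) {
      return Math.floor(px / (CHUNK_SIZE * TILE_PX));
    }

    // Return the keys of chunks that should be active for a viewport,
    // padded by `pad` chunks so tiles exist before scrolling on-screen.
    function activeChunks(viewX, viewY, viewW, viewH, pad = 1) {
      const x0 = chunkCoord(viewX) - pad;
      const y0 = chunkCoord(viewY) - pad;
      const x1 = chunkCoord(viewX + viewW) + pad;
      const y1 = chunkCoord(viewY + viewH) + pad;
      const keys = [];
      for (let cy = y0; cy <= y1; cy++)
        for (let cx = x0; cx <= x1; cx++)
          keys.push(`${cx},${cy}`);
      return keys;
    }
    ```

    Each tick you would diff the returned key list against the currently loaded chunks, creating tiles for new keys and destroying (after saving their state) tiles for dropped ones.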

  • You might try making sure that only things that are on-screen are being rendered and accounted for (I don't think anything like that is built into Construct though).

    You could also try some of the tips in this tutorial on optimizing performance: scirra.com/tutorials/298/performance-tips-for-mobile-games

  • Tokinsom - that is a great idea, passing the canvas to a sprite so it doesn't hit the frame rate in webGL mode.

    rabidsheep - graphics cards automatically avoid rendering offscreen objects.

  • Offscreen objects are still rendered, they are just never bit blitted to the screen. Essentially, everything is rendered into one giant canvas, and then the graphics driver cuts the image down to the viewable window, which is then placed onto the screen. The resources are still being wasted on drawing the initial objects onto the large canvas, though, so if you can implement logic to prevent them from being drawn to that initial canvas, you will increase your program's performance.
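    Whatever the GPU does internally, the "prevent them from being drawn" test is cheap: an axis-aligned rectangle intersection between the object and the viewport. A minimal sketch (object/view fields are assumed names):

    ```javascript
    // On-screen test: axis-aligned rectangle intersection.
    // An engine can skip the draw call entirely when this returns false.
    function isOnScreen(obj, view) {
      return obj.x < view.x + view.w &&
             obj.x + obj.w > view.x &&
             obj.y < view.y + view.h &&
             obj.y + obj.h > view.y;
    }

    // Usage sketch: only issue draw calls for visible objects.
    function drawVisible(objects, view, draw) {
      for (const obj of objects)
        if (isOnScreen(obj, view)) draw(obj);
    }
    ```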

  • Because I thought this was all rather interesting, I created a small test to determine how much of a performance hit is taken generating objects on-screen as opposed to off-screen.

    This test has two options; both use identical events to generate one of each of three sprites every tick and display the time, frame rate and sprite count. One test only creates the sprites on-screen, and the other uses the entire layout. The tests will give you the exact count of sprites at which the frame rate reaches 30.

    You can try it yourself here. I don't know how scientific this test really is, and I'd be interested to get some feedback on its accuracy as I think it could have an effect on how layouts are designed for games that depend on a smooth frame-rate.

    For reference, on my machine, test 1 takes 8310 sprites to get to 30fps. Test 2 takes 11043 to get to 30fps.
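    The reporting side of such a test can be sketched independently of the engine: estimate FPS from per-frame deltas and record the first sprite count at which it dips to 30. This is a hypothetical reconstruction, not the actual .capx logic:

    ```javascript
    // Smoothed FPS estimate from per-frame delta times (in seconds).
    function fpsFromDeltas(deltas) {
      const avg = deltas.reduce((a, b) => a + b, 0) / deltas.length;
      return 1 / avg;
    }

    // Given (spriteCount, fps) samples from a run, report the first
    // count at which the frame rate reached 30 or below.
    function countAtThirtyFps(samples) {
      for (const s of samples)
        if (s.fps <= 30) return s.count;
      return null; // never dropped to 30
    }
    ```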

  • rabidsheep - everything I've heard from people in the know indicates that is incorrect. GPUs are extremely optimized, as chip makers try to get every bit of performance out of them that they can. There's no reason at all for a card to draw offscreen pixels, as it's a complete waste of resources (unless you're doing something tricky intentionally, like what Ashley does to get text rendering in webGL by drawing text to an offscreen canvas and then converting it into a texture that webGL can use).

    You can demonstrate this by making a layout and sprite 1,000,000,000 pixels wide and tall. That would require the graphics card to process 1,000,000,000,000,000,000 pixels per frame (which no current card could handle) and I still get 60 fps.

    Unless I'm misunderstanding your terminology somehow. I'm not sure how you consider rendering different from blitting to the screen.

    If you think they're still being rendered because you're noticing the fact that simply having tens of thousands of offscreen objects in the layout affects the performance, that's not because of rendering, that's because objects have a little bit of overhead code-wise even if they're not doing anything at all, and they still get processed by collision and variable checks and such.

  • Arima - Rendering is the creation of an image from either another image or a model: en.wikipedia.org/wiki/Rendering_(computer_graphics) and blitting is the combination of multiple images into one (most often used to take all of the images in a scene and move them to the single image that is shown on the screen): en.wikipedia.org/wiki/Bit_blit

    It seems that the people you know are confusing the graphics card with the graphics driver. The card itself simply handles all of the processing given to it during the rendering stage, then hands that information back to the main process, where the graphics driver blits it to the screen. Graphics cards are optimized to the fullest extent possible, but they do not clip the scenes given to them, as that is not their task (otherwise there would be little need for a separate graphics driver). When the images that need to be blitted are passed to the graphics driver, they are then cut down to fit the viewing screen.

    This means that the Graphics Card is still doing all of the calculations that it might need, unless you pre-trim your viewing screen.

    Check out 3D Game Engine Design by David H. Eberly (http://bit.ly/Z0quBr), or Berkeley has a free online class that touches on the topic (https://www.edx.org/courses/BerkeleyX/CS184.1x/2013_Spring/about).

  • Thank you for replying, I will check out the link right now :D

  • I've read up on it, and as far as I understand it, that's still incorrect, as I don't think C2 is blitting at all; I'm almost 100% positive it's billboarding instead (using flat, texture-mapped, camera-facing polygons). Even the link you posted describes rendering as the process of converting a scene file into a raster image, with the drawing of the pixels themselves being part of that process. I've rendered stuff in plenty of 3D packages, and none of them have ever wasted time rendering offscreen pixels, which would be a completely pointless waste of time, and I showed that C2 does not do that in my example above.

    It's my understanding that first, a scene 'file' is generated that contains all of the vertex information for all of the objects in the scene (Ashley reuses the data generated for collision polygons as an optimization). Then that data is used to draw the screen from, drawing the parts of each object that are on screen back to front. This can cause overdraw by drawing the same pixel multiple times if that pixel is overlapped by multiple objects. While the vertex information for offscreen objects is still there, it isn't used for the actual drawing of the image itself because the graphics card only draws what is inside the screen.

    I could easily believe that the graphics driver would cut down the undrawn vertex information to only what objects are on screen, which would make sense, but the drawn image itself? No. Cards have limits to their pixel fill rates and drawing offscreen pixels would be terribly wasteful. Also, the graphics driver or card having to check if the geometry is onscreen is a calculation that the driver can likely do far faster automatically than anything that could be done to try to optimize with JavaScript while still having the objects available to process in code/events.

    In practice, it doesn't matter even if either of our understandings is incorrect. Not even Minecraft draws things out to infinity - it, like games all the way back to the NES and earlier, creates/generates and destroys/stores objects based upon distance, because even writing the most optimized assembly possible wouldn't be enough to overcome the fact that computers have limits. If someone wants to have a million objects in a layout like Bobofet, then they should implement a dynamic system to load and destroy objects as they get close to/far from the screen, the same way everybody else does it.

    Bobofet - if you're using a grid like Minecraft, saving/loading the instances to/from an array would work great for that. Also, you might want to do a distance check on the tiles you're testing for collisions, to reduce the number of collision checks. Tiled objects also render faster than lots of small sprites taking up the same space, or, as Tokinsom suggested, paste the tiles to a canvas and then load the canvas image into a sprite.
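    The array-backed grid suggested above can be sketched like this (a hypothetical structure, not Construct's Array object): the array is the source of truth for the world, and sprite instances only exist for cells near the screen.

    ```javascript
    // Sketch of backing destructible tiles with a flat typed array.
    // Cell values are assumed: 0 = air, 1+ = block type.
    class TileGrid {
      constructor(w, h) {
        this.w = w;
        this.h = h;
        this.cells = new Uint8Array(w * h); // zero-initialized (all air)
      }
      get(x, y) { return this.cells[y * this.w + x]; }
      set(x, y, v) { this.cells[y * this.w + x] = v; }
    }
    ```

    When a tile scrolls off-screen you destroy its sprite but keep its cell value; when it scrolls back on, you recreate the sprite from the cell, so digging and placing blocks survives unloading.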

    Rexrainbow was actually talking about possibly making a plugin that would load/create or save/destroy instances based on distance automatically - I recommend bugging him for it, because I want that plugin too. :)

  • I would just like to officially confirm that objects off-screen are not rendered at all. This seems to be such a common misconception I think I'll add it to the manual.

    For the record, here's how it really works:

    • if any part of an object is on-screen, the CPU issues a command to tell the GPU to draw it. If it's offscreen, the GPU never even knows it exists.
    • the GPU only draws objects that are on-screen, but is still smart enough to only draw part of an image if it goes offscreen. So if you have a tiled background the size of your layout, the GPU knows only to draw the part of it that is in the window. So off-screen content is still not rendered.
    • even if Construct 2 issued draw commands for objects that are off-screen, the GPU is smart enough to just completely ignore them, because it can tell they don't appear in the window. However, issuing pointless draw calls is a waste of CPU time, so pretty much every engine that has ever existed will simply not issue draw calls for stuff that is off-screen.

    You could probably measure a really tiny performance impact from having lots of objects off-screen (like, tens of thousands). This is only because Construct 2 is checking if they are off-screen and then deciding to ignore them, which is such a tiny amount of work it usually doesn't factor in to the framerate at all. But they're definitely not being drawn.

  • Thanks for clearing that up, Ashley. Definitely a good idea to add it to the manual.

  • Please forgive me for being so dumb, but I need clarification on one more subject related to this one. I made a search on the tutorials and forums, but it was not a deep search. Here goes:

    Do behaviours like Solid or Physics still try to calculate for objects outside the screen, and how can I disable this if I need to? Imagine there is an enemy at the other end of my level, a couple of screen widths away, and I don't want Construct to check collisions between that enemy and my player. I understand it's not drawn, but since all other logic keeps working for off-screen objects, things like collision checks will eat some CPU power, right? I don't know if the performance hit is worth the effort of doing such a thing...

    Quick edit: So I want the collision checks to be gated by a distance condition (or an on-screen condition).
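    The distance gate described here amounts to a cheap pre-filter before the expensive collision test. A minimal sketch in plain JavaScript (object shapes are illustrative, not Construct's event system):

    ```javascript
    // Gate collision checks by distance: only pairs closer than
    // `radius` are passed on to the real (expensive) collision test.
    function withinRadius(a, b, radius) {
      const dx = a.x - b.x, dy = a.y - b.y;
      return dx * dx + dy * dy <= radius * radius; // avoids Math.sqrt
    }

    // Return only the enemies close enough to be worth checking.
    function candidateEnemies(player, enemies, radius) {
      return enemies.filter(e => withinRadius(player, e, radius));
    }
    ```

    In Construct terms, this is the same as putting a "distance(player.X, player.Y, enemy.X, enemy.Y) < radius" condition above the collision condition so the collision check is skipped for distant instances.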

    Thanks in advance :)
