Which is better for CPU?

  • Which is better for CPU usage: "every 0.3 secs > move 50" or "every 0.03 secs > move 5"? Or is there little difference?

  • The every 0.3 will be more CPU-friendly.

    But the 0.03 will be more smooth.

  • > But the 0.03 will be more smooth.

    I don't think 0.03 can be really smooth, because one tick at 60 FPS is 0.0167 seconds, but please correct me if I'm wrong.

    But on topic: it's generally less CPU-intensive to call your event less often. For your example, it doesn't matter at all how many pixels the image is moved in one action; it's the same process for 1, 50, or even 500 pixels. What matters is how often it's done.
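    The point above can be sketched in plain JavaScript (a hypothetical simulation, not Construct event-sheet code; `simulate` and the numbers are made up for illustration): both settings cover the same distance, but the 0.03 s timer fires its callback ten times as often, while the per-call work is identical.

    ```javascript
    // Hypothetical simulation: over 0.9 simulated seconds, both approaches
    // move the same total distance, but the fine-grained timer fires 10x as often.
    function simulate(intervalSec, pixelsPerStep, durationSec) {
      let x = 0;
      let calls = 0;
      for (let t = intervalSec; t <= durationSec + 1e-9; t += intervalSec) {
        x += pixelsPerStep; // the move itself costs the same for 5 or 50 px
        calls += 1;
      }
      return { x, calls };
    }

    const coarse = simulate(0.3, 50, 0.9); // every 0.3 secs > move 50
    const fine = simulate(0.03, 5, 0.9);   // every 0.03 secs > move 5

    console.log(coarse); // { x: 150, calls: 3 }
    console.log(fine);   // { x: 150, calls: 30 }
    ```

    The distance moved per call is free to change; the number of calls is what scales the CPU cost.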

  • > But the 0.03 will be more smooth.
    >
    > I don't think 0.03 can be really smooth, because one tick at 60 FPS is 0.0167 seconds, but please correct me if I'm wrong.
    >
    > But on topic: it's generally less CPU-intensive to call your event less often. For your example, it doesn't matter at all how many pixels the image is moved in one action; it's the same process for 1, 50, or even 500 pixels. What matters is how often it's done.

    By "more smooth" I meant the movement.

    Instead of jumping 50 pixels every 0.3 seconds, moving just 5 pixels every 0.03 seconds would look smoother, to a degree.

  • Yeah, that's for sure! You're absolutely right! I didn't mean to say that what you said was wrong, just that the particular 0.03 seconds didn't seem to me like it would give real "smoothness". Or am I wrong there?

  • > Yeah, that's for sure! You're absolutely right! I didn't mean to say that what you said was wrong, just that the particular 0.03 seconds didn't seem to me like it would give real "smoothness". Or am I wrong there?

    Nah, you're right.

    ...although, we could argue and nitpick that with some extra application of dt in the movement section, he could have gotten it even smoother.
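    For what it's worth, here is a rough plain-JavaScript sketch of the dt idea (a generic game-loop illustration, not Construct's actual API; `advance` is a made-up helper): multiplying a speed in pixels per second by each frame's delta time gives the same on-screen velocity at any frame rate.

    ```javascript
    // Generic game-loop illustration (not Construct's API): position advances by
    // speed (px/sec) * dt (sec), so the movement is frame-rate independent.
    function advance(x, speedPxPerSec, dtSec) {
      return x + speedPxPerSec * dtSec;
    }

    // 50 px per 0.3 s expressed as a continuous speed:
    const speed = 50 / 0.3; // ≈ 166.67 px/sec

    // One second simulated at 60 FPS vs 30 FPS lands in (almost) the same place:
    let x60 = 0;
    for (let i = 0; i < 60; i++) x60 = advance(x60, speed, 1 / 60);

    let x30 = 0;
    for (let i = 0; i < 30; i++) x30 = advance(x30, speed, 1 / 30);

    console.log(x60.toFixed(2), x30.toFixed(2)); // ≈ 166.67 at either frame rate
    ```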

  • Yeah, that's right. Guess I have too much time for procrastination.

  • Optimisation: don't waste your time

    I used to optimize every event and design the logic from the ground up, taking into account weaker and slower JS performance on mobiles. It worked wonders.

    Then I started making an even more complex game, à la Homeworld in complexity. Since the target was desktops, I thought: let's ignore optimizations and see what happens. Halfway through combat fleet AI testing, I ran into 96% CPU use (single-threaded) and it stuttered, dropping frames.

    Quickly realizing how bad it was to ignore logic optimization from the ground up, I revisited everything, put the optimizations back in, and it's been much better, at a more reasonable ~50% peak CPU usage.

    Fleet battle scenario:

    https://drive.google.com/file/d/0BzblvP ... NseTg/view

    Draw calls take only 5-10% of the CPU usage; the rest is pure logic.

    It is ALWAYS good practice to optimize logic, regardless of the target device. Thinking "it runs at 60 FPS, that's good enough" is a mistake, because someone with a weaker machine, like a notebook, ultrabook, or older PC, will struggle.
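    As a hypothetical illustration of that kind of logic optimization (not the poster's actual project code), one common pattern is to stagger expensive per-unit work across frames instead of running it for every unit every tick:

    ```javascript
    // Hypothetical fleet update (illustration only): cheap work runs for every
    // unit every frame; expensive work is staggered so only 1/groups of the
    // units run it on any given frame.
    function updateFleet(units, frame, groups = 4) {
      for (let i = 0; i < units.length; i++) {
        units[i].x += units[i].vx; // cheap: every unit, every frame
        if (i % groups === frame % groups) {
          // expensive (e.g. target scanning): only this frame's slice of units
          units[i].targetsScanned = (units[i].targetsScanned || 0) + 1;
        }
      }
    }

    const units = Array.from({ length: 8 }, () => ({ x: 0, vx: 1 }));
    for (let frame = 0; frame < 4; frame++) updateFleet(units, frame);

    // After 4 frames every unit moved 4 px, but each ran its expensive scan once.
    console.log(units.every(u => u.x === 4 && u.targetsScanned === 1)); // true
    ```

    The movement stays smooth every frame, while the heavy logic's cost per frame drops by the group factor.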

  • It's entirely sensible to guide optimisations based on measurements. However, I would be amazed if anyone could measure any significant performance difference for the question asked in this thread. Changes at that level are really not important.
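    In that spirit, a minimal sketch of measuring before optimizing (assuming any JS environment with `performance.now`, such as browsers or recent Node; `timeIt` is a made-up helper): time the candidate code over many iterations and see whether the difference even registers.

    ```javascript
    // Rough micro-benchmark helper (assumes performance.now is available, as in
    // browsers and recent Node): run a snippet many times and report elapsed time.
    function timeIt(label, fn, iterations = 100000) {
      const start = performance.now();
      for (let i = 0; i < iterations; i++) fn();
      const ms = performance.now() - start;
      console.log(`${label}: ${ms.toFixed(2)} ms for ${iterations} calls`);
      return ms;
    }

    let x = 0;
    timeIt("move 5", () => { x += 5; });
    timeIt("move 50", () => { x += 50; });
    // The per-call cost is essentially identical; only call frequency matters.
    ```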

  • Thanks, everyone. Another thing: sprites look blurry when I use linear sampling, but point sampling looks rough. I've tried turning pixel rounding on and off.
