Construct 2 provides three main expressions for examining performance and memory: fps, cpuutilisation and ImageMemoryUsage. Windows Task Manager provides another source of performance data. However there are various complications when looking at these measurements: they can be difficult to interpret correctly, and it's easy to draw incorrect conclusions from them. Here are a few reasons why you need to be careful when using these measurements.
FPS (frames per second) is the most straightforward measurement. It simply tells you how many frames the engine managed to render every second. This is generally the ultimate authority on performance: more FPS means better performance, and that's that, regardless of what any of the other measurements are indicating.
There is however one gotcha with FPS measurements: if nothing is moving, Construct 2 stops rendering frames. This is an optimisation. Consider something like a banner ad or some kind of widget on a web page which is mostly static and only occasionally moves. If nothing was changing but C2 kept rendering new frames anyway, it would consume CPU and GPU time, spin up fans, and drain the battery. Users tend to recognise technology that is hostile to their device like this, and avoid it. So instead, if nothing moves at all, Construct 2 leaves the previous frame showing, and the GPU is not used at all. However the engine continues to tick, and the FPS measurement counts ticks even when a frame is not drawn. So in this case no rendering work is being done, but it will still indicate 60 FPS.
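To illustrate, here's a minimal sketch (my own illustration, not the actual C2 engine code) of an FPS counter that counts engine ticks rather than drawn frames:

```javascript
// An FPS counter that increments on every engine tick, whether or not a
// frame was actually drawn. This is why a fully static scene can still
// report 60 FPS.
function makeFpsCounter() {
  let ticks = 0;
  let lastSecond = 0;
  let fps = 0;
  return {
    // Called once per engine tick. `now` is a timestamp in ms;
    // `didDraw` is whether the renderer actually produced a frame.
    tick(now, didDraw) {
      ticks++;                        // counted regardless of didDraw
      if (now - lastSecond >= 1000) {
        fps = ticks;                  // report ticks over the last second
        ticks = 0;
        lastSecond = now;
      }
    },
    get fps() { return fps; }
  };
}

// Simulate one second of a completely static scene: 60 ticks, nothing drawn.
const counter = makeFpsCounter();
for (let i = 1; i <= 60; i++) {
  counter.tick(i * (1000 / 60), /* didDraw */ false);
}
console.log(counter.fps); // 60, even though zero frames were rendered
```

With this design, a completely static scene still reports 60 FPS, because the counter never looks at whether anything was actually rendered.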
Sometimes users make minimal projects to test some aspect of performance, and this makes it more likely that sometimes nothing is moving. This often shows up as questions on the forum like "When nothing is moving I get 60 FPS, but if I move a single sprite it goes down to 30 FPS. Why is moving a sprite so slow?". It's not that moving a sprite is slow - it's that not rendering anything is fast! To avoid this, make sure something changes every frame. For example the renderperfgl test uses a small sprite in the corner with the rotate behavior to make sure it genuinely redraws every frame, ensuring the FPS measurement accurately represents the rendering work.
Timers are not always perfectly accurate, so you shouldn't trust the cpuutilisation number too much. Treat it as an approximation, and if you're looking to optimise it, look for large changes - small changes could just be timer inaccuracy.
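The general technique can be sketched like this (my own assumptions about the approach, not C2's exact implementation): time how long the engine's own work takes each tick, and divide by the tick interval. Timer resolution and scheduling noise make the result approximate.

```javascript
// Time a tick's work and express it as a fraction of the frame budget.
// Date.now() has roughly millisecond resolution, which is one source of
// the inaccuracy discussed above.
function measureCpuUtilisation(workFn, frameIntervalMs) {
  const start = Date.now();
  workFn();                           // the engine's logic + rendering work
  const elapsed = Date.now() - start;
  return Math.min(elapsed / frameIntervalMs, 1);
}

// Example: work that busy-waits ~8ms out of a ~16.7ms frame, so the
// reported utilisation comes out somewhere around 0.5.
const u = measureCpuUtilisation(() => {
  const until = Date.now() + 8;
  while (Date.now() < until) { /* simulated work */ }
}, 1000 / 60);
console.log(u.toFixed(2));
```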
Construct 2's profiler uses the same timer approach to identify which event sheets or event groups are taking the most CPU time. Within the narrow scope of investigating event performance, it's still a useful way to get a rough idea of which areas need optimising. Events run on one thread, so it's not too complicated. There is one complication with the "Draw calls" CPU measurement though: some browsers (like Chrome) basically save the draw calls without really doing anything, and then dispatch this work to another thread to run in parallel to the JS code. Other browsers actually run the draw calls at the time they are made. This means despite the same work being done, one browser may include the "real" draw calls CPU time in the "Draw calls" measurement, but another browser may not. The overall CPU work is actually the same, but work done on a different thread can't be measured with the timer approach. Draw calls contribute to the overall cpuutilisation, and in extreme cases this can make it look like one browser is using far more CPU than another, but really it's just a question of how it runs the draw calls.
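The effect can be sketched like this (illustrative code with my own names, not how any browser is actually implemented): timing calls that merely record commands measures almost nothing, even though the same work still happens later on another thread.

```javascript
// Time how long a function takes with a wall-clock timer.
function timeIt(fn) {
  const start = Date.now();
  fn();
  return Date.now() - start;
}

// "Deferred" style: the call just records a command for another thread
// to execute later, so the timer sees almost nothing.
const commandBuffer = [];
function deferredDrawCall(cmd) {
  commandBuffer.push(cmd);
}

// "Immediate" style: the call does the work right now. Pretend each
// draw call costs ~1ms of CPU time.
function immediateDrawCall(cmd) {
  const until = Date.now() + 1;
  while (Date.now() < until) { /* simulated GPU driver work */ }
}

const deferredMs = timeIt(() => {
  for (let i = 0; i < 20; i++) deferredDrawCall(i);
});
const immediateMs = timeIt(() => {
  for (let i = 0; i < 20; i++) immediateDrawCall(i);
});
// Same 20 draw calls, but the timer reports wildly different costs.
console.log(deferredMs, immediateMs);
```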
Remember in multi-process browsers like Chrome, in Task Manager you need to take into account all the running Chrome processes and their CPU usages together. If you only look at one process, you may think the CPU usage is lower than it really is. Also Task Manager often reflects all cores' activity in one CPU usage number. So on a quad-core system, 100% cpuutilisation corresponds to one busy core, which Task Manager would report as 25% usage. This is another reason you can get different measurements in different places.
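That arithmetic can be written as a small helper (the function name is my own):

```javascript
// Task Manager averages CPU usage across all cores, so one fully busy
// core on an N-core machine shows up as 100/N percent overall.
function taskManagerPercent(singleCoreUtilisation, coreCount) {
  return (singleCoreUtilisation * 100) / coreCount;
}

console.log(taskManagerPercent(1.0, 4)); // 25 - one busy core on a quad-core
```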
Memory is complicated, and probably the most difficult to measure in a useful way. Between OS memory management, multi-process browsers, caching, garbage collection and GPU memory, there's a lot that needs consideration.
Let's start with ImageMemoryUsage. This is an estimate of the total memory used by the currently loaded WebGL textures in the game. Usually textures reside in GPU memory, which will not appear in Task Manager's measurements at all. Games also need a lot of memory for everything other than images. Basically, ImageMemoryUsage is a reasonable indication of GPU texture memory usage, which is just one aspect of the overall memory usage.
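As a sketch of how such an estimate can be computed (an assumption about the general approach, not C2's actual code), each 32-bit RGBA texture takes roughly width × height × 4 bytes:

```javascript
// Sum an estimate of GPU texture memory: 4 bytes per texel for 32-bit
// RGBA textures. Real engines may also round dimensions up to the size
// the GPU actually allocates.
function estimateImageMemoryBytes(textures) {
  return textures.reduce((sum, t) => sum + t.width * t.height * 4, 0);
}

// Example: two 512x512 textures (1 MB each) and one 1024x1024 texture (4 MB).
const bytes = estimateImageMemoryBytes([
  { width: 512, height: 512 },
  { width: 512, height: 512 },
  { width: 1024, height: 1024 },
]);
console.log((bytes / (1024 * 1024)).toFixed(1) + " MB"); // 6.0 MB
```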
Modern browsers are multi-process, so to accurately get the memory usage for a browser you need to take into account all its processes in Task Manager. Modern operating systems have very sophisticated memory management capabilities, including the possibility for processes to share memory. If a browser has 4 processes and they are all sharing a large amount of memory, depending on the measurements you look at, you may end up counting that memory 4x over. If you want to learn more, I found a nice analogy involving sharing crayons.
Operating systems and browsers are also big on caching. Modern systems have so much RAM that it makes sense to basically use it as a giant cache. It's naive to think OSs should try to keep memory free - it's like thinking you should keep your CPU L1 cache as empty as possible too, just in case something else needs to come along and fill it up, when it performs best if it's always fully utilised. Besides, cached memory can be quickly evicted if another app really needs it. Garbage collection is related to this: it basically keeps filling up memory until it needs to spend CPU time releasing memory, which on a system with gigabytes of memory may not be necessary for a while.
The implication of this is that in many cases a game might appear to be using more and more memory. We've actually had bugs reported where users are looking at Task Manager and seeing the memory usage going up and up, to maybe 100 MB for a small project. Suppose you have 4 GB of memory. The game is using about 2.4% of that. Why should it waste CPU time running a garbage collection? Alternatively perhaps the browser or OS have cached various resources that the game needed, and will keep them in case the game needs them again, only releasing them if it starts to run low on memory. In fact even the C2 engine uses caching in many cases, such as growing internal arrays but keeping them at their maximum size, so it adapts to the memory requirements of the game while avoiding having to continually reallocate memory. Altogether, this helps ensure the game stays at best performance, without having to regularly shift things around in memory for little benefit.
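The "grow but keep at maximum size" caching mentioned above can be sketched like this (names and details are my own illustration, not the C2 engine's code): the list grows when needed, but clearing it keeps the backing storage around, so steady-state operation avoids reallocating every frame - at the cost of memory usage appearing to only ever go up.

```javascript
// A list that recycles its backing storage: clearing it resets the
// logical count but keeps the allocated capacity for reuse next frame.
class RecycledList {
  constructor() {
    this.items = [];
    this.count = 0;          // number of slots currently in use
  }
  push(item) {
    this.items[this.count++] = item;  // reuses an existing slot if present
  }
  clear() {
    this.count = 0;          // logical clear; this.items keeps its capacity
  }
  get capacity() {
    return this.items.length;
  }
}

const list = new RecycledList();
for (let i = 0; i < 1000; i++) list.push(i);
list.clear();
console.log(list.count, list.capacity); // 0 1000 - storage retained for reuse
```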
You can check for garbage-collection related memory usage in Chrome by opening developer tools, going to the Timeline tab, and clicking "Collect garbage". In at least one case this "fixed" a memory leak bug report. But remember there could be other kinds of caching involved, and there's not such an easy way to clear that from memory short of rebooting your system.
Due to all of this, memory usage measurements are difficult to interpret in a useful way. The main useful question is: does the game crash because it runs out of memory? This is one of the main reasons Construct 2 games crash, often because users are wasting memory, unaware of the ingenious tricks professional developers deploy to produce impressive games with a reasonable memory footprint. For more information, see the guide to memory usage in the manual. However if your game doesn't outright fail due to running out of memory, it's hard to know what the operating system's memory measurements mean - it could appear to be using a lot of memory, but that could just be a whole stack of caching and uncollected garbage. As ever, you just have to test on a range of devices and see if it really works.
Note the ImageMemoryUsage measurement is exempt from all the caching and garbage collection processes, since it measures only WebGL textures that are currently in use. Remember that this is just one aspect of the overall memory usage, though.
The various FPS, CPU and memory measurements are not "simple" numbers - there are several caveats you need to be aware of to avoid misinterpreting them. The FPS is the ultimate authority: if the game does not run out of memory, and the FPS is good while things are moving, then that's fine. If not, the CPU and memory measurements can provide a guide when investigating why. Just don't trust them too much.