Audio Question: Runtime Analysis

  • So I'm curious about AudioSurf-esque runtime audio analysis, e.g. the user imports an audio track, and the game analyses things like tone, treble, rhythm, and pitch to create a dynamic environment.

    You could sync an environment to a track manually with a LOT of work.

    So, two questions:

    1 - Does anyone know anything about audio analysis, a la AudioSurf?

    2 - Can anyone think of a way to implement this in Construct? (I would think through Python or a plugin; XAudio2 doesn't seem flexible enough.)
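As a rough illustration of what a plugin or Python script could do beyond XAudio2's built-in levels: with direct access to raw samples (an assumption — Construct does not expose them out of the box), pitch can be estimated with an FFT. Everything below is a hedged sketch using only the standard library; the function names are illustrative, not part of any Construct API.

```python
# Sketch: estimate the dominant pitch of a block of samples via a
# hand-rolled radix-2 FFT. Assumes raw sample access, which a plugin
# would have to provide.
import cmath
import math

def fft(samples):
    """Recursive Cooley-Tukey FFT; len(samples) must be a power of 2."""
    n = len(samples)
    if n == 1:
        return samples
    even = fft(samples[0::2])
    odd = fft(samples[1::2])
    out = [0j] * n
    for k in range(n // 2):
        t = cmath.exp(-2j * math.pi * k / n) * odd[k]
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out

def dominant_frequency(samples, sample_rate):
    """Return the frequency (Hz) of the strongest bin below Nyquist."""
    spectrum = fft(samples)
    half = len(samples) // 2
    magnitudes = [abs(c) for c in spectrum[:half]]
    peak_bin = max(range(1, half), key=lambda k: magnitudes[k])
    return peak_bin * sample_rate / len(samples)

# Demo on a synthetic 440 Hz tone (no real audio file needed).
rate = 8192
tone = [math.sin(2 * math.pi * 440 * i / rate) for i in range(1024)]
print(dominant_frequency(tone, rate))  # → 440.0
```

Real music would need windowing and a smarter pitch tracker than "loudest bin", but this shows the kind of analysis that's out of reach with XAudio2's peak/RMS alone.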

  • It would have to be through a plugin.

    Python can't do anything events can't.

  • I was exploring the possibilities of audio analysis a while back too. The only things you can really use are the get peak level and RMS level commands (if you're using XAudio2). However, this method won't really work for real songs, as the levels are constantly twitching around and tough to work with. Using a song I created, I split the track into separate channels and played them all back simultaneously; that way I had more control with the get peak and RMS level commands. For pitch and other stuff, though, it's not possible with XAudio2, I'm afraid. The only things you really have to work with are volume and rhythm.
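The peak/RMS approach above can be sketched in a few lines: chop the samples into short windows, compute peak and RMS per window, then smooth the RMS with a moving average so the "constantly twitching" levels become usable. This is a minimal illustration, not Construct/XAudio2 code; all names are made up, and real code would read samples with the `wave` module.

```python
# Sketch: per-window peak/RMS levels plus smoothing, the same idea as
# polling XAudio2's get peak / RMS commands but done offline on samples.
import math

def window_levels(samples, window=512):
    """Yield (peak, rms) for each consecutive window of samples."""
    for start in range(0, len(samples) - window + 1, window):
        chunk = samples[start:start + window]
        peak = max(abs(s) for s in chunk)
        rms = math.sqrt(sum(s * s for s in chunk) / window)
        yield peak, rms

def smooth(values, span=4):
    """Trailing moving average to tame jittery level readings."""
    out = []
    for i in range(len(values)):
        lo = max(0, i - span + 1)
        out.append(sum(values[lo:i + 1]) / (i + 1 - lo))
    return out

# Demo: a tone with a louder burst in the middle; the burst shows up
# clearly in the smoothed RMS curve.
samples = [math.sin(i * 0.05) * (3.0 if 2048 <= i < 2560 else 1.0)
           for i in range(4096)]
rms_track = [rms for _, rms in window_levels(samples)]
print(smooth(rms_track))
```

Splitting a song into per-instrument channels, as described above, just means running this on each channel separately, which gives much cleaner curves than one mixed track.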

  • I only wish for music position & latency data, like there is for channels.

    That'd allow some nice syncing for long music files.

    Edit: am I the only one who dislikes Audiosurf?
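To show why playback position would be enough for syncing: given beat timestamps precomputed offline, each tick the game could fire whatever events have come due, with a latency offset. This is a hypothetical sketch — Construct exposes no such position API for music (that's the feature request), and the function name is made up.

```python
# Sketch: map a reported playback position (seconds) onto precomputed
# beat timestamps, compensating for output latency.
def due_beats(beat_times, position, latency=0.05):
    """Return beats whose latency-compensated time has passed."""
    target = position + latency
    return [t for t in beat_times if t <= target]

# Demo: at 0.96 s with 50 ms latency, the first two beats are due.
beats = [0.5, 1.0, 1.5, 2.0]
print(due_beats(beats, 0.96))  # → [0.5, 1.0]
```

In practice you'd track which beats were already consumed rather than rescanning the whole list, but the point is that position + latency is all the data the sync needs.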

  • While it would probably need a new plugin, Python can do a lot that events can't, via additional libraries.

  • Cool. Well, as I have absolutely no experience in Python, I guess this is a feature request now.
