
I've researched this question far and wide, but I can't find any useful answers. Basically, I want to create a translucent (or semi-transparent) audio-reactive overlay which can be transposed onto a generic video file. The idea is to give the video the appearance of pulsating with the audio track.

I think I can achieve this effect with Processing and the minim library, but I don't know how to formulate the sketch. The output should be 1920x1080 and the pulsating overlay should produce a sense of vibrant luminosity (e.g. a light color with 30-50% brightness and perhaps 25-50% opacity).
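As an illustration of the brightness/opacity mapping described above (this is a sketch, not part of the original post: it assumes an audio level normalised to 0..1, such as what Minim's `AudioInput.mix.level()` returns, and maps it into the 25-50% opacity band; in a Processing sketch the result would feed the alpha argument of `fill()`):

```java
public class PulseMapping {
    // Map a 0..1 loudness value into the 25-50% opacity band
    // (alpha 64..128 out of 255), clamping out-of-range input.
    static int levelToAlpha(float level) {
        float clamped = Math.max(0f, Math.min(1f, level));
        return Math.round(64 + clamped * 64f);
    }

    public static void main(String[] args) {
        System.out.println(levelToAlpha(0.0f)); // silence -> 64 (25% opacity)
        System.out.println(levelToAlpha(1.0f)); // peak    -> 128 (50% opacity)
    }
}
```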

I'm updating this question with the sketch provided by @george-profenza (with modifications to use video instead of cam input):

import processing.video.*;

Movie movie;
PGraphics overlay;

import ddf.minim.*;

Minim minim;
AudioInput in;

void setup(){
  size(320,240);

  movie = new Movie(this, "input.mp4");
  movie.play();

  // setup sound
  minim = new Minim(this);
  in = minim.getLineIn();

  // setup overlay
  overlay = createGraphics(width,height);
  // initial draw attributes
  overlay.beginDraw();
  overlay.strokeWeight(3);
  overlay.rectMode(CENTER);
  overlay.noFill();
  overlay.stroke(255,255,255,32);
  overlay.endDraw();
}

void draw(){

  //update overlay based on audio data
  overlay.beginDraw();
  overlay.background(0,0);
  for(int i = 0; i < in.bufferSize() - 1; i++)
  {
    overlay.line( i, 50 + in.left.get(i)*50, i+1, 50 + in.left.get(i+1)*50 );
    overlay.line( i, 150 + in.right.get(i)*50, i+1, 150 + in.right.get(i+1)*50 );
  }
  overlay.endDraw();
  //render video then overlay composite
  image(movie,0,0);
  image(overlay,0,0);
}
// update movie
void movieEvent(Movie m){
  m.read();
}

Presumably this sketch works, but unfortunately the underlying processing.video (GStreamer 1+) library seems to be malfunctioning on Ubuntu, and there doesn't appear to be a way to update the library with one of the community-provided forks, according to issue #90 on GitHub.
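One possible workaround, since it is processing.video's GStreamer backend that is broken: pre-split the video into an image sequence plus a WAV with ffmpeg, then `loadImage()` the frames in the sketch and skip processing.video entirely. The ffmpeg invocations below are a hedged sketch of the usual pattern (filenames and frame rate are placeholders), assembled here as plain Java strings:

```java
public class SplitCommands {
    // Build the ffmpeg commands that pre-split a video so a sketch can
    // read numbered frame images instead of relying on GStreamer.
    static String extractFrames(String video, int fps) {
        return "ffmpeg -i " + video + " -vf fps=" + fps + " frames/%04d.png";
    }

    static String extractAudio(String video) {
        return "ffmpeg -i " + video + " -vn audio.wav";
    }

    public static void main(String[] args) {
        System.out.println(extractFrames("input.mp4", 30));
        System.out.println(extractAudio("input.mp4"));
    }
}
```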

If anyone can suggest a way to fix this problem or has another solution, I'd be appreciative.

Lichtung

2 Answers


That's a broad question. I'll cover a few aspects:

  1. translucent (audio-reactive) overlay: look into PGraphics. It's like layers in Processing. You can draw into a PGraphics (with translucency, etc.), then render the layers in whatever order you want. See the commented example below.
  2. audio-reactive: you can use Minim to read loudness or FFT data, or use other software that can do more advanced audio analysis and export data for Processing to read.
  3. 1920x1080 output: in my experience at the time of writing, 1080p video playback within Processing was OK but not perfectly smooth (I saw occasional stuttering, tested on a MacBook with 16GB RAM and on a PC, also with 16GB RAM). Doing sound analysis and overlay graphics on top may degrade performance further; the main issue is keeping the audio and the composited graphics in sync if you want to do this in real time.

If you simply want to output a video with beautiful generative audio-responsive graphics, and it doesn't need to render in real time, I recommend a more "offline" approach:

  • pre-analysing the audio so only the data you need to drive the visuals is kept (this can be as simple as loudness)
  • prototyping the visuals at low resolution with realtime audio and no video, to check that it looks/feels right
  • rendering the video + visuals (with sound-mapped properties) frame by frame at 1080p, then muxing the audio back in sync (with After Effects, ffmpeg, etc.)
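The pre-analysis step above could be sketched like this (plain Java, so it drops straight into a Processing sketch; it assumes mono samples in -1..1 and a sample rate divisible by the frame rate, e.g. 44100 Hz at 30 fps gives 1470 samples per video frame):

```java
public class PreAnalyse {
    // One RMS loudness value per video frame: the offline equivalent
    // of reading the live input buffer in draw().
    static float[] framewiseRms(float[] samples, int sampleRate, int fps) {
        int perFrame = sampleRate / fps;          // samples per video frame
        int frames = samples.length / perFrame;   // whole frames only
        float[] rms = new float[frames];
        for (int f = 0; f < frames; f++) {
            double sum = 0;
            for (int i = 0; i < perFrame; i++) {
                float s = samples[f * perFrame + i];
                sum += s * s;
            }
            rms[f] = (float) Math.sqrt(sum / perFrame);
        }
        return rms;
    }
}
```

Each entry of the returned array can then drive the overlay's alpha or scale for the corresponding frame when rendering offline.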

For reference here is a very basic proof of concept sketch that demonstrates:

  • using overlay graphics
  • updating overlay graphics to be audio reactive (Minim MonitorInput sample)
  • compositing video + overlays

Note the low-res video size.

import processing.video.*;

Capture cam;
PGraphics overlay;

import ddf.minim.*;

Minim minim;
AudioInput in;


void setup(){
  size(320,240);

  // setup video (may be video instead of webcam in your case)
  cam = new Capture(this,width,height);
  cam.start();

  // setup sound
  minim = new Minim(this);
  in = minim.getLineIn();

  // setup overlay
  overlay = createGraphics(width,height);
  // initial draw attributes
  overlay.beginDraw();
  overlay.strokeWeight(3);
  overlay.rectMode(CENTER);
  overlay.noFill();
  overlay.stroke(255,255,255,32);
  overlay.endDraw();
}

void draw(){

  //update overlay based on audio data
  overlay.beginDraw();
  overlay.background(0,0);
  for(int i = 0; i < in.bufferSize() - 1; i++)
  {
    overlay.line( i, 50 + in.left.get(i)*50, i+1, 50 + in.left.get(i+1)*50 );
    overlay.line( i, 150 + in.right.get(i)*50, i+1, 150 + in.right.get(i+1)*50 );
  }
  overlay.endDraw();
  //render video then overlay composite
  image(cam,0,0);
  image(overlay,0,0);
}
// update video (may be movieEvent(Movie m) for you)
void captureEvent(Capture c){
  c.read();
}
George Profenza
  • Haven't found anything to simulate the cam input, so I modified the sketch to read: `Movie movie;` instead of `Capture cam;`, `movie = new Movie(this, "path/to/video.mp4");` instead of `cam = new Capture (this, width, height);`, `movie.play();` instead of `cam.start();`, and `image(movie,0,0);` instead of `image(cam,0,0);`. However, this errors out with the message "A library used by this sketch is not installed properly". Something about an "UnsatisfiedLinkError", something about a library that "relies on native code that's not available", and something about running a "32-bit application". – Lichtung Feb 04 '19 at 21:37
  • Also tried `void movieEvent(Movie m){ m.read(); }` and `void movieEvent(Movie movie){ movie.read(); }` instead of `void captureEvent(Capture c){ c.read(); }` as you suggested, but this errors out with the same message. – Lichtung Feb 04 '19 at 21:44
  • FWIW the line producing the error is `movie = new Movie(this, "/path/to/video");`. Not sure why this is causing the issue. All relevant libraries should be installed. – Lichtung Feb 04 '19 at 22:05
  • It appears the underlying GStreamer library is unstable or no longer supported on Ubuntu: https://github.com/processing/processing-video/issues/90. Dead end for me, I'm afraid. – Lichtung Feb 04 '19 at 22:33
  • @Introspectre You're bang on with switching to `movieEvent` from `captureEvent`, etc. I haven't used the Processing video library on Ubuntu in ages and didn't realise it has such issues. Maybe also try the [processing-glvideo](https://github.com/gohai/processing-glvideo) library. If neither works and you want to use Processing, you can convert your video to image sequences instead (keeping the sound separate). Alternatively you can try other creative coding toolkits like [OpenFrameworks](https://openframeworks.cc/download/) (C++ but inspired by Processing) or [cinder](https://github.com/cinder/Cinder) (C++)... – George Profenza Feb 05 '19 at 10:39
  • ...additionally you might want to try [PureData](https://puredata.info/), which is great at audio processing, with [Gem](http://gem.iem.at/) for video/graphics, but bear in mind it uses a graph/node-based visual programming paradigm (as opposed to Java/C++). Worst case, you can do the bulk of the work in video editing/compositing software and use Processing, etc. for the generative elements, which you then import/edit back on top of the video. – George Profenza Feb 05 '19 at 10:43

First off, familiarize yourself with writing a video file; you'll need to save the output somehow. Then make sure you can read the file all right. Then you'll need access to the video file's audio (?) unless you want to use audio from the microphone. A transparent overlay is easy: just paint with less alpha.

zambari