Processing was designed to make drawing with Java much easier. Processing for Android has the power of its desktop sibling plus access to sensor data. Putting these together, shouldn't it be easy to display a stereoscopic image and move around it, like with an Oculus Rift or Google Cardboard?
2 Answers
The code below displays an image in two viewports, one for each eye, so that the scene appears 3D when viewed through a Google Cardboard device. Accelerometer and gyroscope data move the 3D image as the head moves. The one known bug is in Processing for Android itself: the program crashes in landscape mode unless it is started in that orientation. I am using Processing 2.0.3 and Android 4.3, so this may have been addressed in current versions (although I did see it was still an open issue in the Processing-Bugs discussion on GitHub). The texture image is a 100 x 100 pixel image of a favorite cartoon character; use whatever you like and store the image in the data folder.
//Scott Little 2015, GPLv3
//pBoard is Processing for Cardboard
import android.os.Bundle; //for preventing sleep
import android.view.WindowManager;
import ketai.sensors.*; //ketai library for sensors
KetaiSensor sensor;
float ax,ay,az,mx,my,mz; //sensor variables
float eyex = 50; //camera variables
float eyey = 50;
float eyez = 0;
float panx = 0;
float pany = 0;
PGraphics lv; //left viewport
PGraphics rv; //right viewport
PShape s; //the object to be displayed
//********************************************************************
// The following code is required to prevent sleep.
//********************************************************************
void onCreate(Bundle savedInstanceState) {
  super.onCreate(savedInstanceState);
  // fix so screen doesn't go to sleep when app is active
  getWindow().addFlags(WindowManager.LayoutParams.FLAG_KEEP_SCREEN_ON);
}
//********************************************************************
void setup() {
  sensor = new KetaiSensor(this);
  sensor.start();
  size(displayWidth, displayHeight, P3D); //used to set P3D renderer
  orientation(LANDSCAPE); //causes crashing if not started in this orientation
  lv = createGraphics(displayWidth/2, displayHeight, P3D); //size of left viewport
  rv = createGraphics(displayWidth/2, displayHeight, P3D); //size of right viewport
  PImage img = loadImage("jake.jpg"); //texture image
  s = createShape();
  TexturedCube(img, s, 50, 50);
}
void draw() {
  //update the camera from the latest sensor readings
  panx = panx - mx*10; //pan with the gyroscope
  pany = 0;
  eyex = 0;
  eyey = -20*az; //raise/lower the eye with the accelerometer
  ViewPort(lv, eyex, eyey, panx, pany, -15); //left viewport
  ViewPort(rv, eyex, eyey, panx, pany, 15); //right viewport
  //draw the two viewports side by side on the main panel
  image(lv, 0, 0);
  image(rv, displayWidth/2, 0);
}
//sensor data
void onAccelerometerEvent(float x, float y, float z) {
  ax = x;
  ay = y;
  az = z;
}
void onGyroscopeEvent(float x, float y, float z) {
  mx = x;
  my = y;
  mz = z;
}
//draw the scene into one viewport, with the camera offset for one eye
void ViewPort(PGraphics v, float x, float y, float px, float py, int eyeoff) {
  v.beginDraw();
  v.background(102);
  v.lights();
  v.pushMatrix();
  v.camera(x + eyeoff, y, 300, px, py, 0, 0.0, 1.0, 0.0);
  v.noStroke();
  //v.box(100);
  v.shape(s);
  v.popMatrix();
  v.endDraw();
}
//put a texture on a PShape object, 6 faces for a cube
void TexturedCube(PImage tex, PShape s, int a, int b) {
  s.beginShape(QUADS);
  s.texture(tex);
  // +Z "front" face
  s.vertex(-a, -a,  a, 0, b);
  s.vertex( a, -a,  a, b, b);
  s.vertex( a,  a,  a, b, 0);
  s.vertex(-a,  a,  a, 0, 0);
  // -Z "back" face
  s.vertex( a, -a, -a, 0, 0);
  s.vertex(-a, -a, -a, b, 0);
  s.vertex(-a,  a, -a, b, b);
  s.vertex( a,  a, -a, 0, b);
  // +Y "bottom" face
  s.vertex(-a,  a,  a, 0, 0);
  s.vertex( a,  a,  a, b, 0);
  s.vertex( a,  a, -a, b, b);
  s.vertex(-a,  a, -a, 0, b);
  // -Y "top" face
  s.vertex(-a, -a, -a, 0, 0);
  s.vertex( a, -a, -a, b, 0);
  s.vertex( a, -a,  a, b, b);
  s.vertex(-a, -a,  a, 0, b);
  // +X "right" face
  s.vertex( a, -a,  a, 0, 0);
  s.vertex( a, -a, -a, b, 0);
  s.vertex( a,  a, -a, b, b);
  s.vertex( a,  a,  a, 0, b);
  // -X "left" face
  s.vertex(-a, -a, -a, 0, 0);
  s.vertex(-a, -a,  a, b, 0);
  s.vertex(-a,  a,  a, b, b);
  s.vertex(-a,  a, -a, 0, b);
  s.endShape();
}

- I found a solution to the orientation problem by removing the orientation(LANDSCAPE) line and adding android:screenOrientation="landscape" in the manifest XML, as described here: http://forum.processing.org/two/discussion/157/landscape-with-opengl-renderer-on-nexus-4#Item_5. – scottlittle Mar 02 '15 at 03:07
- Hi. Do you have any GitHub repo for this? – Tejas Mar 25 '15 at 08:32
- I'm new to Cardboard and unable to use the above code to display the stereographic image. Can you provide a sample project or the linked classes needed to use your code? I checked the GitHub link as well, but there is nothing other than this same code. Links or references to sample projects would be helpful. Thanks. – Amrut Bidri Apr 07 '15 at 05:40
- Everything needed to make it work is on GitHub. Dependencies I did not explicitly mention include the Ketai library and its dependencies. If you have OS X Mavericks, I'd recommend using Processing 2.0.3; you might want that version even if you don't have a Mac. Also, I should mention that this is not Cardboard, but it works on Cardboard devices. – scottlittle Apr 07 '15 at 16:02
- @scottlittle Thanks, I have to do it on Android. What should I do to kickstart developing a Cardboard app for Android? Any suggestions would help. – Amrut Bidri Apr 16 '15 at 11:16
- If you want to make a professional-quality Cardboard app, I'd say go with Unity. – scottlittle Apr 16 '15 at 17:46
Displaying a bad stereoscopic image is easy. There are reasons it took the Oculus team so long to get it right ;)
First of all, you need to know that people cross their eyes to varying degrees to focus on objects at different distances. If you set your cameras up perfectly parallel, everything looks right only when the user focuses at infinity. If instead you turn each camera inward by a fixed amount, without eye tracking, you get toe-in stereo, as used in 3D movies, which suffers from keystone distortion. The best you can do without proper eye tracking is a skewed camera frustum: off-axis projection. More on this can be found here.
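To make the off-axis idea concrete, here is a minimal sketch of computing asymmetric frustum bounds for each eye. All names and parameter values (screen size, distance, IPD) are illustrative assumptions, not something from either answer.

```java
// Sketch: asymmetric (off-axis) frustum bounds for a stereo pair.
// A virtual screen plane sits centered between the two eyes; each
// eye's frustum is skewed so both frusta meet at that plane.
public class OffAxisFrustum {
    // Returns {left, right, bottom, top} at the near plane, suitable
    // for a frustum()/glFrustum-style call, for an eye shifted
    // horizontally by eyeOffset from the screen's center line.
    static double[] frustumBounds(double screenHalfWidth,
                                  double screenHalfHeight,
                                  double screenDistance,
                                  double near,
                                  double eyeOffset) {
        double scale = near / screenDistance; // project screen edges onto the near plane
        double left   = (-screenHalfWidth - eyeOffset) * scale;
        double right  = ( screenHalfWidth - eyeOffset) * scale;
        double bottom = -screenHalfHeight * scale;
        double top    =  screenHalfHeight * scale;
        return new double[] { left, right, bottom, top };
    }

    public static void main(String[] args) {
        double ipd = 0.064; // assumed typical interpupillary distance, meters
        double[] l = frustumBounds(0.3, 0.2, 1.0, 0.1, -ipd / 2); // left eye
        double[] r = frustumBounds(0.3, 0.2, 1.0, 0.1,  ipd / 2); // right eye
        // The two frusta come out as mirror images, skewed toward each other.
        System.out.printf("left eye:  %.4f .. %.4f%n", l[0], l[1]);
        System.out.printf("right eye: %.4f .. %.4f%n", r[0], r[1]);
    }
}
```

Note that the cameras themselves stay parallel; only the projection is skewed, which is what avoids the keystone effect of toe-in stereo.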
There are also other problems. For example, when you turn your head, you don't just change your eyes' orientation; you also change their absolute position in 3D space. If you simply apply your phone's rotation to your cameras, the effect will be off. That's why you should use at least a head model. The current version of the Cardboard SDK models a neck, to account for the vertical translation when looking up or down.
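A head model can be sketched very simply: treat the eyes as sitting on a lever arm above and in front of a neck pivot, so rotating the head also translates the eyes. The offset constants below are rough guesses for illustration, not the Cardboard SDK's actual values.

```java
// Sketch: a minimal neck model. Pitching the head about the neck
// pivot rotates the neck-to-eye offset vector, translating the eyes.
public class NeckModel {
    static final double NECK_TO_EYE_UP      = 0.075; // meters, assumed
    static final double NECK_TO_EYE_FORWARD = 0.080; // meters, assumed

    // Eye position relative to the neck pivot after pitching the head
    // by 'pitch' radians (positive = tilting back to look up).
    // Returns {forward, up}: a plain 2D rotation of the offset vector.
    static double[] eyePosition(double pitch) {
        double f0 = NECK_TO_EYE_FORWARD, u0 = NECK_TO_EYE_UP;
        double forward = f0 * Math.cos(pitch) - u0 * Math.sin(pitch);
        double up      = f0 * Math.sin(pitch) + u0 * Math.cos(pitch);
        return new double[] { forward, up };
    }

    public static void main(String[] args) {
        double[] level = eyePosition(0);               // looking straight ahead
        double[] up45  = eyePosition(Math.PI / 4);     // looking up 45 degrees
        System.out.printf("level: fwd=%.3f up=%.3f%n", level[0], level[1]);
        System.out.printf("up45:  fwd=%.3f up=%.3f%n", up45[0], up45[1]);
    }
}
```

Applying this translation before the rotation is what makes looking down at your feet feel right; with rotation alone the world appears to pivot around your eyeballs.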
There are many other problems as well: the pincushion distortion of the image caused by the headset's lenses, head tracking, calibrating everything to a specific phone, headset, and pair of lenses, the user's interpupillary distance... The list goes on.
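The usual fix for the lens distortion is to pre-distort the rendered image with the inverse (barrel) distortion so the lenses' pincushion cancels it out. A common approach is a radial polynomial model; the coefficients below are made-up examples, since real headsets ship calibrated values per lens.

```java
// Sketch: radial (barrel) pre-distortion to counter lens pincushion.
// Uses a Brown-Conrady style model: r' = r * (1 + k1*r^2 + k2*r^4).
public class LensDistortion {
    static double distortionFactor(double r2, double k1, double k2) {
        return 1.0 + k1 * r2 + k2 * r2 * r2;
    }

    // Pre-distort a coordinate normalized so (0,0) is the lens center.
    static double[] distort(double x, double y, double k1, double k2) {
        double r2 = x * x + y * y;
        double f = distortionFactor(r2, k1, k2);
        return new double[] { x * f, y * f };
    }

    public static void main(String[] args) {
        double k1 = 0.22, k2 = 0.26; // assumed coefficients, per-lens in practice
        double[] center = distort(0.0, 0.0, k1, k2); // unchanged at the lens center
        double[] edge   = distort(0.7, 0.0, k1, k2); // pushed outward near the edge
        System.out.printf("center: (%.3f, %.3f)%n", center[0], center[1]);
        System.out.printf("edge:   (%.3f, %.3f)%n", edge[0], edge[1]);
    }
}
```

In a real renderer this would run per-pixel in a fragment shader over the already-rendered eye texture, not per-vertex on the CPU.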
All in all, no, VR is not a trivial or simple matter. The problem is that when it is done badly, it isn't immediately obvious. Users may not consciously know that something is wrong, but their brains will. A brain has been trained all its life to interpret the reality around it, and it is good at knowing when something is off. Badly done VR apps can cause disorientation, headaches, eye strain, and nausea, or simply provide an unsatisfying experience. Some of the big players in the VR world fear that a flood of badly made VR apps will give a lot of people a bad experience, scaring them away from the technology and preventing it from becoming popular.
In short, if you want to do VR, either make sure you REALLY know what you are doing, or use an SDK/framework made by specialists.
