After reading the following:
http://developer.android.com/guide/practices/screens_support.html
How do I convert px into dp for Android images?
I assumed that "dp = px / (dpi / 160)". I tested it on the LDPI screen and it worked perfectly, my first thought was "finally, a breakthrough!", but then after testing it on a HDPI screen I found it did not work so well. So then I tried "dp = px * (dpi / 160)" and found that didn't work, not that I figured it would... I tried a couple other formulas, but none of them worked. One of them was to only use the dp on the small screens and then just get the screen px for the other screens. That didn't work.
Obviously, after reading how to support multiple screens, I am supposed to use dp to get the correct width and height to work with. I am not quite sure why this is failing, which is why I am turning to you.
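To make the formula concrete, this is the conversion I am assuming (the pxToDp name is just mine for illustration, not something from the framework):

// dp = px / (densityDpi / 160), per the documentation linked above
public static float pxToDp(float px, int densityDpi) {
    return px / (densityDpi / 160f);
}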
Here is my code:
public class Scale {

    ScreenType type;
    Display display;

    public Scale(WindowManager wm) {
        this.display = wm.getDefaultDisplay();

        // Get Display Type //
        DisplayMetrics metrics = new DisplayMetrics();
        wm.getDefaultDisplay().getMetrics(metrics);
        int density = metrics.densityDpi;

        switch (density) {
            case DisplayMetrics.DENSITY_LOW:
                this.type = ScreenType.LDPI;
                break;
            case DisplayMetrics.DENSITY_MEDIUM:
                this.type = ScreenType.MDPI;
                break;
            case DisplayMetrics.DENSITY_HIGH:
                this.type = ScreenType.HDPI;
                break;
            default:
                this.type = ScreenType.XHDPI;
                System.exit(0);
                break;
        }
    }

    public ScreenType getScreenType() {
        return this.type;
    }

    public int getWidth() {
        return this.display.getWidth();
    }

    public int getHeight() {
        return this.display.getHeight();
    }

    public int getDPWidth() {
        float dp = this.getWidth() / (this.getScreenType().getDensity() / 160f);
        return (int) dp;
    }

    public int getDPHeight() {
        float dp = this.getHeight() / (this.getScreenType().getDensity() / 160f);
        return (int) dp;
    }
}
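For reference, this is roughly how I construct and use it; the numbers in the comments assume an 800x480 HDPI (240 dpi) display, using the same math as at the bottom of this post:

Scale scale = new Scale((WindowManager) getSystemService(Context.WINDOW_SERVICE));
int widthDp  = scale.getDPWidth();  // 800 / (240 / 160) = 533
int heightDp = scale.getDPHeight(); // 480 / (240 / 160) = 320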
The ScreenType enum is as follows:
public enum ScreenType {
    LDPI(120), MDPI(160), HDPI(240), XHDPI(320);

    private final int density;

    ScreenType(int density) {
        this.density = density;
    }

    public int getDensity() {
        return this.density;
    }
}
I know I don't need the ScreenType enum, but that's not the point at the moment. I just need to figure out why the screen size always comes out wrong.
Now, when I go to draw the background, I do the following...
batcher.beginBatch(Assets.loadingAtlas);
for (int x = 0; x <= this.scale.getDPWidth(); x += 50) {
    for (int y = 0; y <= this.scale.getDPHeight(); y += 50) {
        batcher.drawSprite(x, y, 50, 50, Assets.backgroundPattern);
    }
}
batcher.endBatch();
Then it goes to the batcher, which has the following code...
public void beginBatch(Texture texture) {
    texture.bind();
    numSprites = 0;
    bufferIndex = 0;
}

public void drawSprite(float x, float y, float width, float height, TextureRegion region) {
    float halfWidth = width / 2;
    float halfHeight = height / 2;
    float x1 = x - halfWidth;
    float y1 = y - halfHeight;
    float x2 = x + halfWidth;
    float y2 = y + halfHeight;

    verticesBuffer[bufferIndex++] = x1;
    verticesBuffer[bufferIndex++] = y1;
    verticesBuffer[bufferIndex++] = region.u1;
    verticesBuffer[bufferIndex++] = region.v2;

    verticesBuffer[bufferIndex++] = x2;
    verticesBuffer[bufferIndex++] = y1;
    verticesBuffer[bufferIndex++] = region.u2;
    verticesBuffer[bufferIndex++] = region.v2;

    verticesBuffer[bufferIndex++] = x2;
    verticesBuffer[bufferIndex++] = y2;
    verticesBuffer[bufferIndex++] = region.u2;
    verticesBuffer[bufferIndex++] = region.v1;

    verticesBuffer[bufferIndex++] = x1;
    verticesBuffer[bufferIndex++] = y2;
    verticesBuffer[bufferIndex++] = region.u1;
    verticesBuffer[bufferIndex++] = region.v1;

    numSprites++;
}

public void endBatch() {
    vertices.setVertices(verticesBuffer, 0, bufferIndex);
    vertices.bind();
    vertices.draw(GL10.GL_TRIANGLES, 0, numSprites * 6);
    vertices.unbind();
}
Then, when the vertices are drawn, the following code is used...
public void draw(int primitiveType, int offset, int numVertices) {
    GL10 gl = glGraphics.getGL();
    if (indices != null) {
        indices.position(offset);
        gl.glDrawElements(primitiveType, numVertices, GL10.GL_UNSIGNED_SHORT, indices);
    } else {
        gl.glDrawArrays(primitiveType, offset, numVertices);
    }
}
Now my two biggest questions are these: One, am I using the correct formula to calculate where I should draw the background and how many times I should draw it? Two, should I have a formula to convert dp back to pixels when I draw an element?
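For question two, what I have in mind is something along these lines (just a sketch; dpToPx is a hypothetical helper, simply the inverse of the formula above):

// px = dp * (densityDpi / 160) -- the inverse conversion
public static float dpToPx(float dp, int densityDpi) {
    return dp * (densityDpi / 160f);
}

// e.g. converting the 50 dp tile size back into pixels before calling drawSprite
float tilePx = dpToPx(50, metrics.densityDpi);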
What happens when it fails is the following:
The yellow marks where it is OK to have black, and the blue X's mark where it is not OK to have black. The blue should extend much further than it does.
I understand mathematically why this is happening, but I feel like there could be a better formula that would work for both LDPI and HDPI screens rather than just LDPI. The current formula also makes everything a tiny bit smaller on MDPI.
The math is Density-Independent Pixels (DP) = WIDTH / (DENSITY / 160). So an example would be DP = 800 / (240 / 160) = 800 / 1.5 = 533.
So what are your thoughts?
Thanks ahead of time for any input! If you want an SSCCE, let me know, but it would take some time to put together for this case and I am hoping we won't need to.