I was testing with a vector graphic and a nature photo to get a sense of how darker spots come out, and I'm seeing a lot of large triangles. How can I increase the density of vertices in the result? What's puzzling me is that the vertex density doesn't match the pixel density of the original image. I have no intention of doing AI upscaling, but the source image for the nature scene is a 4K image, and yet the triangles are huge: whole blocks of pixels are collapsing into a single vertex. I'm using OpenCV to build a height map, and trimesh with NumPy for mesh manipulation. Here's a link to my code: https://gist.github.com/mcneds/85955b94627265e6b085f949e5172e3e
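For context on what I expected: a grid mesh with exactly one vertex per height-map pixel. This is a minimal sketch (not my actual gist code) using only NumPy; the function name and scale parameters are my own invention, and the resulting arrays could be wrapped with `trimesh.Trimesh(vertices=v, faces=f)`:

```python
import numpy as np

def heightmap_to_mesh(height, xy_scale=1.0, z_scale=1.0):
    """Build a grid mesh with exactly one vertex per pixel of `height`.

    Returns (vertices, faces): vertices is (H*W, 3), and faces holds two
    triangles for each pixel quad. Hypothetical sketch, not production code.
    """
    h, w = height.shape
    # One vertex per pixel: x/y from the pixel grid, z from the height value.
    ys, xs = np.mgrid[0:h, 0:w]
    vertices = np.column_stack([
        xs.ravel() * xy_scale,
        ys.ravel() * xy_scale,
        height.ravel() * z_scale,
    ])
    # Flat index of the top-left vertex of every (h-1) x (w-1) quad.
    idx = (np.arange(h - 1)[:, None] * w + np.arange(w - 1)).ravel()
    # Two triangles per quad, consistent winding order.
    tri1 = np.column_stack([idx, idx + w, idx + 1])
    tri2 = np.column_stack([idx + 1, idx + w, idx + w + 1])
    faces = np.vstack([tri1, tri2])
    return vertices, faces
```

With a 4K height map this gives roughly 8.3 million vertices, which is why I'd expect fine detail rather than large triangles.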
example image
I've tried adding logic to the mesh-creation function and the height-map function that treats pixels as groups: it would take the height map (already scaled to fit the lithophane size) and divide its pixels into averaged color groups, working down toward a 1:1 pixel-to-vertex ratio, but all I got was a bunch of index errors.
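For reference, the grouping step I was attempting can be done without manual index bookkeeping. This is a sketch under my own assumptions (the function name is hypothetical); cropping to a multiple of the block size first is what avoids the ragged-edge index errors. `cv2.resize` with `interpolation=cv2.INTER_AREA` would do the same averaging in one call:

```python
import numpy as np

def block_average(height, factor):
    """Average each `factor` x `factor` pixel block into one value.

    Crops the map so both dimensions divide evenly by `factor`; ragged
    edge blocks are what typically cause off-by-one index errors.
    """
    h, w = height.shape
    h, w = h - h % factor, w - w % factor
    cropped = height[:h, :w]
    # Reshape into (rows, factor, cols, factor) and average the block axes.
    blocks = cropped.reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3))
```

At `factor=1` this degenerates to the identity (the 1:1 pixels-to-vertices case I was aiming for).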