
I was testing with a vector graphic and a nature photo to get a sense of how darker spots come out, and I'm seeing a lot of very large triangles. How can I increase the density of vertices in the result? What's puzzling me is that the density of vertices doesn't match the pixel density of the original image. I have no intention of doing AI upscaling, but the source image for the nature scene is a 4K image, and yet the vertices are huge: whole groups of pixels are being collapsed into one vertex. I'm using OpenCV to build a height map, and trimesh with NumPy for mesh manipulation. Here's a link to my code: https://gist.github.com/mcneds/85955b94627265e6b085f949e5172e3e
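For context on why this happens, here is a minimal sketch of a grid-based height-map-to-mesh step (the function name and scales are hypothetical, not taken from the linked gist): in this common approach, every height-map pixel becomes exactly one vertex, so the vertex density is locked to the height map's resolution, not the source image's.

```python
import numpy as np

def heightmap_to_grid_mesh(height_map, xy_scale=1.0, z_scale=1.0):
    """One vertex per height-map pixel, two triangles per pixel quad.

    An (r, c) height map always yields r*c vertices, no matter how
    large the source image was before it got resized to (r, c).
    """
    r, c = height_map.shape
    ys, xs = np.mgrid[0:r, 0:c]
    vertices = np.column_stack([
        xs.ravel() * xy_scale,
        ys.ravel() * xy_scale,
        height_map.ravel() * z_scale,
    ])
    # Flat index of the top-left vertex of every quad, then split
    # each quad into two triangles.
    idx = (ys[:-1, :-1] * c + xs[:-1, :-1]).ravel()
    faces = np.concatenate([
        np.column_stack([idx, idx + 1, idx + c]),
        np.column_stack([idx + 1, idx + c + 1, idx + c]),
    ])
    # trimesh.Trimesh(vertices=vertices, faces=faces) would wrap these.
    return vertices, faces

# 4x5 height map -> 20 vertices and 2*(3*4) = 24 triangles
hm = np.linspace(0.0, 1.0, 20).reshape(4, 5)
V, F = heightmap_to_grid_mesh(hm)
print(V.shape, F.shape)  # (20, 3) (24, 3)
```

If this matches your pipeline, then "super large vertices" suggests the height map itself was downsampled well below 4K before meshing, so the fix would be to mesh from a higher-resolution height map rather than to change the mesh step.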

[Example image: Untitled.png]

I've tried adding code to the mesh-creation function and the height-map function that treats pixels as groups: it would take the height map already scaled to fit the lithophane size and divide its pixels into groups averaged by color, all the way down to a 1:1 ratio of pixels to vertices, but all I got were a bunch of index errors.
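For what it's worth, the grouping idea can usually be made index-safe by cropping the height map to a multiple of the block size before reshaping, so there are no ragged edge blocks. A sketch with NumPy (the `block_average` helper here is hypothetical, not from the gist):

```python
import numpy as np

def block_average(height_map, block):
    """Average each `block` x `block` pixel group into one value.

    Cropping to a multiple of `block` first is what avoids the
    index/shape errors that ragged edge blocks otherwise cause.
    """
    r, c = height_map.shape
    r2, c2 = r - r % block, c - c % block  # largest multiples of `block`
    cropped = height_map[:r2, :c2]
    # reshape into (rows, block, cols, block) and average the block axes
    return cropped.reshape(r2 // block, block, c2 // block, block).mean(axis=(1, 3))

hm = np.arange(36, dtype=float).reshape(6, 6)
small = block_average(hm, 3)
print(small.shape)  # (2, 2)
```

OpenCV's `cv2.resize(img, (w, h), interpolation=cv2.INTER_AREA)` performs the same kind of box averaging and handles non-integer ratios for you; going the other direction, resizing the height map *up* before meshing is one way to raise vertex density with a one-vertex-per-pixel mesher.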

Comment: Welcome to Stackoverflow! Please don't just paste a link to a repo but add minimal reproducible sample code to your question that addresses one specific issue. You can use code markups to format your question. Please read [ask]! – Markus Apr 22 '23 at 10:48

0 Answers