
There are many approaches to running untrusted code on a typical CPU: sandboxes, fake roots, virtualization...

What about untrusted code for GPGPU (OpenCL, CUDA, or already-compiled binaries)?

Assuming that memory on the graphics card is cleared before running such third-party untrusted code:

  • Are there any security risks?
  • What kind of risks?
  • Is there any way to prevent them?
    • Is sandboxing possible/available on GPGPU?
    • Maybe binary instrumentation?
    • Other techniques?

P.S. I am more interested in GPU binary-code-level security than in high-level GPGPU programming language security (but those solutions are welcome as well). What I mean is that references to GPU opcodes (a.k.a. machine code) are welcome.

Grzegorz Wierzowiecki
  • Thanks Navi for the answer. Assuming I would use a separate GPU card for the computations (for example an older Tesla...), how can such executions of untrusted code be made secure? – Grzegorz Wierzowiecki Jan 09 '11 at 02:07

2 Answers


The risks are the same as with any C program. Plus, you can make your whole desktop freeze. I managed to do that once by executing a very long calculation: the screen stopped updating, so for instance the time on the clock widget did not change for that period. So you should use two graphics cards, one of them dedicated to the GPU stuff.
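For illustration, here is a minimal CUDA sketch of the kind of kernel that produces such a freeze. The kernel and buffer names are mine, not Navi's actual code: the kernel just busy-waits on a flag the host never sets, so it occupies the GPU indefinitely.

    // Minimal sketch (CUDA); illustrative, not Navi's experiment.
    #include <cstdio>
    #include <cuda_runtime.h>

    __global__ void spin_forever(volatile int *flag)
    {
        // volatile forces a real memory read on every iteration, so the
        // compiler cannot optimize the loop away.
        while (*flag == 0) { /* hog the GPU */ }
    }

    int main()
    {
        int *flag = nullptr;
        cudaMalloc(&flag, sizeof(int));
        cudaMemset(flag, 0, sizeof(int));

        spin_forever<<<1, 1>>>(flag);

        // Blocks until the kernel exits. On a GPU that also drives the
        // display, the driver's watchdog usually aborts the kernel after a
        // few seconds (the screen is frozen until then) and an error such
        // as a launch timeout is reported here; on a dedicated compute
        // card the kernel can run unbounded.
        cudaError_t err = cudaDeviceSynchronize();
        std::printf("kernel ended: %s\n", cudaGetErrorString(err));

        cudaFree(flag);
        return 0;
    }

Note that a second, dedicated card avoids the desktop freeze but not the hogging itself: the untrusted kernel still monopolizes that card until the driver or the host intervenes.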

Navi
  • OK, so let's assume we have one separate card for the GPU processing. What risks are there then? You wrote "the same as with any C program", but a C program can easily call "system()"; what might happen in the above case? – Grzegorz Wierzowiecki Aug 11 '11 at 19:20

GPU code can definitely be risky. Current GPUs do not provide memory protection, so essentially every GPU kernel can access all video memory. I'm not sure whether it is possible to access the host's memory as well (via memory mapping, maybe?). It's also not possible to preempt kernels: they can "hog" the GPU, and this causes freezes if the GPU is used for graphics output too. (Usually the driver will terminate kernels that don't exit after a few seconds.)
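To make "no memory protection" concrete, here is a hedged CUDA sketch; the buffer size and offset are invented for illustration. The kernel writes far past the end of its allocation, and on hardware of that era such a store could silently land in unrelated video memory rather than faulting cleanly.

    // Minimal sketch (CUDA, illustrative values). buf is a 256-int
    // allocation; the kernel stores ~4 MiB past its end. Without memory
    // protection, the write can corrupt whatever else lives there in VRAM.
    #include <cstdio>
    #include <cuda_runtime.h>

    __global__ void scribble(int *buf)
    {
        buf[(1 << 20) + threadIdx.x] = 42;  // far outside the allocation
    }

    int main()
    {
        int *buf = nullptr;
        cudaMalloc(&buf, 256 * sizeof(int));

        scribble<<<1, 32>>>(buf);
        cudaError_t err = cudaDeviceSynchronize();

        // Depending on GPU generation and driver, this may report no error
        // at all (the write silently hit other memory) or something like
        // "unspecified launch failure".
        std::printf("result: %s\n", cudaGetErrorString(err));

        cudaFree(buf);
        return 0;
    }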

Supposedly, AMD's new GPU series do have some memory protection features, but I doubt they are used at the moment. It is possible to split a GPU's multiprocessors into multiple segments on current-generation hardware (GeForce 4xx+, Radeon 6xxx+), but that's not really the same as real time-sliced, preemptive multitasking. ;)

dietr
  • Actually, NVIDIA's GPUs have had memory protection (with an MMU) since at least the 8000 series. I don't know about ATI. For example, it shouldn't be possible to cause privilege escalation from a user-space process by using GPU code. – wump Jan 19 '11 at 15:25
  • wump: are you sure? It seems pretty clear that there is no memory protection. Writing outside your allocated memory buffers can cause all kinds of odd things to happen, including host system crashes (on G80/GT200; I haven't tested on GF100). There's certainly no MMU active to protect memory, even if one exists. – dietr Jan 23 '11 at 04:28
  • Indeed, it's possible to crash the GPU by writing outside of allocated memory buffers. If this is the same GPU as is used for rendering, your system will crash. I've always assumed this is due to bugs in the driver. The MMU is there and active though, and prevents a process from writing into some other process's memory space. I *think* that if you have a separate GPU it's not possible to crash your system this way. – wump Feb 08 '11 at 13:22
  • After discussing this with someone else, we came to the conclusion that while GPUs do have an MMU (to provide virtualized addressing), it doesn't provide memory protection. MMUs do not necessarily have this functionality. – dietr Feb 09 '11 at 12:59