UML and use cases are relevant for any kind of system, from software-intensive systems to human organisational systems ("business use cases", "business activity diagrams", etc.). They are therefore also relevant for an autonomous system like a robot.
From the point of view of the UML definitions, you could in theory interpret an obstacle as an actor, since it interacts with the system simply by being there:
Each UseCase specifies some behavior that a subject can perform in collaboration with one or more Actors. UseCases define the offered Behaviors of the subject without reference to its internal structure. These Behaviors, involving interactions between the Actors and the subject, may result in changes to the state of the subject and communications with its environment.
- UML 2.5.1 specification
So if it helps you to better describe the problem and your design, you can use this approach.
However, this is not a recommended approach when using goal-oriented use cases: the obstacle is neither a human nor an autonomous system, so it neither uses the SuC to fulfil goals of its own nor contributes to the goals of the main actors. In other words, there is no active or intended interaction between the obstacle and the robot: the obstacle does not initiate a use case and does not interact with the robot on its own. The fact that an obstacle may be detected through physical interaction (reflecting light, force, or ultrasound back to the robot) is purely passive.
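To make the "passive" point concrete, here is a minimal Python sketch (all names and thresholds are invented for illustration) of how an obstacle typically appears from the robot software's point of view: never as a party that calls into the system, only as a reflected signal that the system reads and interprets on its own initiative.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Echo:
    """Passive reflection picked up by the robot's sensor (light, force, ultrasound)."""
    distance_m: float
    bearing_deg: float

class RangeSensor:
    """Stand-in sensor; in a real robot this would wrap actual hardware."""
    def __init__(self, echoes):
        self._echoes = iter(echoes)

    def read(self) -> Optional[Echo]:
        return next(self._echoes, None)

class Robot:
    def __init__(self, sensor: RangeSensor, safety_distance_m: float = 0.5):
        self.sensor = sensor
        self.safety_distance_m = safety_distance_m

    def navigate_step(self) -> str:
        """One step of the 'navigate to destination' behaviour.

        The obstacle has no interface into the system: it never calls
        anything and never initiates a use case. The robot merely
        interprets the reflected signal and reacts.
        """
        echo = self.sensor.read()
        if echo and echo.distance_m < self.safety_distance_m:
            return f"avoid obstacle at {echo.bearing_deg} deg"
        return "move forward"

# Usage: the only active party is whoever starts the navigation behaviour.
robot = Robot(RangeSensor([Echo(2.0, 0.0), Echo(0.3, 15.0)]))
print(robot.navigate_step())  # move forward
print(robot.navigate_step())  # avoid obstacle at 15.0 deg
```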
In your example of a warning system for a car, the main actor would be the driver, whose goal is to drive safely to the destination, which may include the goal of avoiding obstacles. A pedestrian would benefit from this technology as a stakeholder, but not as an actor.
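A similarly hypothetical sketch for the car example shows where the driver and the pedestrian sit relative to the system boundary: the driver starts the behaviour and receives the warning, while the pedestrian never exchanges anything with the system.

```python
class WarningSystem:
    """Toy model of the warning system; the distance would come from a sensor in reality."""
    def __init__(self, obstacle_distance_m: float):
        self.obstacle_distance_m = obstacle_distance_m

    def check_for_obstacles(self) -> str:
        """Part of the 'drive safely to destination' use case, initiated by the Driver."""
        if self.obstacle_distance_m < 10.0:   # assumed warning threshold
            return "WARNING: obstacle ahead, please brake"
        return "all clear"

# The Driver is the actor: they switch the assistant on and act on its output.
assistant = WarningSystem(obstacle_distance_m=4.0)
print(assistant.check_for_obstacles())  # WARNING: obstacle ahead, please brake

# A pedestrian benefits if the driver brakes in time, but exchanges nothing
# with the system, so they appear in the analysis only as a stakeholder.
```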
The analysis would of course be different if the "obstacle" were meant to be a real actor (e.g. a human playing with the bot), or if you were using a "misuse case", i.e. a second modelling technique based on use cases in which the actors are people trying to trick the system (e.g. a thief vs. a police bot).