What if it's programmed to understand reflections?
Then it will see a robot roughly at the place it occupies.
Now what?
If the algorithm is sophisticated enough, it might run some tests to check whether the image is essentially itself. That, on the other hand, means that if someone built the same model without that safety measure, it might destroy every mirror it happens upon, but it would win a fight against its safeguarded counterpart every time, thanks to a much shorter reaction time.
On the other hand, any targeting mechanism worth its salt shouldn't rely on visual images alone to judge distance. If a simple laser rangefinder measures the distance in addition to what the image evaluation calculates, the two readings can be compared; if they don't match, the probability is pretty high that it's looking at a mirror. Shouldn't be too tough to solve, really.