“If we want humanoid robots to teach or have other social functions, we need them to trigger mirror neurons,” Oberman told New Scientist.

This logic doesn’t necessarily hold.

Human biology is not required to produce humanlike behavior; more likely, we simply haven't yet developed an alternative "biology" complex enough to match it.

The phrases "humanoid" and "social functions" are deeply ambiguous, and that ambiguity makes it difficult to test the assumptions and theories built on them.

AI != Human Ability

Is the goal of AI and robotics to produce humanlike abilities? I think many AI, robotics, and computational complexity folks abandoned that goal a long time ago. Perhaps it's attainable, but why would we want to attain humanness in things non-human?

Besides, unless something IS human, it's always going to seem to some degree non-human to us. That doesn't mean non-human intelligence, complexity, or robotics is incapable of complex behavior, learning, socializing, and so on. It means those behaviors will play out in ways that seem different to us than they do in humans.