Before asking whether artificial intelligence will have free will, we should first ask whether we ourselves have it.
There is no consensus within psychology as to whether we really do have free will – although much of our field seems to assume that we don’t. Whether or not humans have free will is a question that philosophers have debated for centuries, and they will likely continue to do so.
Can a robot have “free will”?
I would say that it would be nearly impossible. The biggest problem would be agreeing on a common definition of free will. We have not yet reached any consensus on whether even people have it. Given that, it would be difficult to determine whether a robot has it or not.
If we should ever succeed in creating a robot with free will, we would then have to deal with the moral and legal issues it raises.
Is it still a machine, or should it be treated like a living person? Can it be owned, and can it own anything? Does it have rights and duties? Who is responsible for its actions? Is there a period of adolescence before it can be accepted as an independent member of society?
I think the technical problems could be solved relatively easily if these philosophical questions were settled first.