In 1964, Joseph Weizenbaum, a computer scientist
at the Massachusetts Institute of Technology,
developed a chatbot called Eliza,
modelled on a “person-centred” psychotherapist:
whatever you said, it would mirror your words back to you.
If you said “I feel sad”, Eliza would respond with:
“Why do you feel sad?”, and so on.
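The mirroring technique can be sketched in a few lines of modern code. This is an illustrative reconstruction, not Weizenbaum's original implementation (which was written in MAD-SLIP); the rules and responses here are invented for the example.

```python
import re

# ELIZA-style reflection: a few pattern rules that mirror the user's
# statement back as a question. Hypothetical rules for illustration only.
RULES = [
    (re.compile(r"i feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.+)", re.I), "Tell me more about your {0}."),
]

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            # Echo the captured fragment, dropping trailing punctuation.
            return template.format(match.group(1).rstrip(".!?"))
    return "Please go on."  # stock reply when nothing matches

print(respond("I feel sad"))  # -> Why do you feel sad?
```

The trick is that the program understands nothing: it only rearranges the speaker's own words, which is exactly what made its apparent empathy so unsettling.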
Weizenbaum intended the project to demonstrate
the superficiality of communication between humans and machines,
not to serve as a blueprint for future products.