Discussion about this post

Andra Keay:

*Weizenbaum's ELIZA

Andra Keay:

The first chatbot to seemingly pass the Turing Test was Weizenbaum's ELIZA, modeled on the Rogerian psychotherapeutic technique of reflecting a patient's last statement back as a question. In spite of knowing that ELIZA was only a program, users became converts and often found it 'better than therapy'.
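The reflection technique can be sketched in a few lines of Python. This is a toy illustration, not Weizenbaum's actual pattern-matching script, which used keyword-ranked decomposition rules:

```python
# Toy ELIZA-style reflection: swap first/second-person pronouns
# and echo the statement back as a question.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my", "are": "am",
}

def reflect(statement: str) -> str:
    # Normalize, strip trailing punctuation, and swap pronouns word by word.
    words = statement.lower().rstrip(".!?").split()
    swapped = [REFLECTIONS.get(w, w) for w in words]
    return "Why do you say " + " ".join(swapped) + "?"

print(reflect("I am unhappy with my job."))
# → Why do you say you are unhappy with your job?
```

Even this crude mirroring hints at why users found the effect so compelling: the response is always built from the user's own words.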

We know that empathy can be enacted quite convincingly by robots and AI, so you raise an important point: for whose benefit? Who benefits from empathic or sycophantic systems? That is the major flaw, and you rightfully call out the potential for such systems to shift to harmful behaviour without warning.

There is also untapped potential for AI to design fairness into systems of representative government or resource distribution, perhaps using Rawls's Theory of Justice. Sadly, though, I can't see any commercial incentives for this to happen.
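One way such a Rawlsian criterion might be operationalized, purely as my own assumption rather than anything from Rawls's text, is the "difference principle" read as maximin: among candidate allocations, prefer the one that maximizes the worst-off person's share.

```python
# Hypothetical maximin selection over candidate resource allocations.
# Each allocation is a tuple of per-person shares.
def rawlsian_choice(allocations):
    # Pick the allocation whose minimum (worst-off) share is largest.
    return max(allocations, key=lambda shares: min(shares))

print(rawlsian_choice([(10, 1, 1), (5, 4, 3), (4, 4, 4)]))
# → (4, 4, 4): its worst-off share (4) beats 1 and 3
```

Whether any deployed system would adopt such an objective is, as the comment notes, a question of incentives rather than of implementation difficulty.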

