The first chatbot to seemingly pass the Turing Test was Weizenbaum's Eliza, modeled on a psychoanalytic technique of reflecting your last statement back to you as a question. Despite knowing that Eliza was only a program, users became converts and often found it 'better than therapy'.
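For readers unfamiliar with how simple the trick was, here is a rough sketch of the reflection idea in Python (my own illustration of the general technique, not Weizenbaum's original pattern-matching script):

```python
# Minimal sketch of ELIZA-style reflection: swap first- and second-person
# words in the user's statement and hand it back as a question.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "mine": "yours", "am": "are",
    "you": "I", "your": "my", "yours": "mine", "are": "am",
}

def reflect(statement: str) -> str:
    words = statement.rstrip(".!?").lower().split()
    swapped = [REFLECTIONS.get(w, w) for w in words]
    return "Why do you say " + " ".join(swapped) + "?"

print(reflect("I am sad about my job."))
# -> Why do you say you are sad about your job?
```

The real ELIZA used keyword-ranked rewrite rules rather than a single template, but the core move was this mechanical mirroring.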
We know that empathy can be enacted quite successfully in robots and AI, so you raise an important point: for whose benefit? Who benefits from empathic or sycophantic systems? That is the major flaw, and you rightly call out the potential for such systems to shift toward harm without warning.
There is also untapped potential for AI to design fairness into systems such as representative government or resource distribution, perhaps using Rawls's Theory of Justice. Sadly, though, I can't see any commercial incentive for this to happen.
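To make that concrete, here is a toy sketch (my own, with entirely invented numbers) of the narrowest reading of "designing fairness in": scoring candidate allocations by Rawls's difference principle, i.e. preferring whichever allocation leaves the worst-off person best off:

```python
# Toy illustration of the Rawlsian 'maximin' (difference principle):
# among candidate resource allocations, pick the one that maximizes
# the share of the worst-off member. Values are hypothetical.
allocations = {
    "equal split": [25, 25, 25, 25],
    "utilitarian": [70, 20, 8, 2],    # highest total, but a very poor worst-off
    "slight tilt": [40, 30, 18, 12],
}

best = max(allocations, key=lambda name: min(allocations[name]))
print(best)  # -> 'equal split', since its worst-off member gets the most
```

Real institutional design is obviously far richer than this, but the point is that fairness criteria like maximin are perfectly computable objectives if anyone chose to optimize for them.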
That's very interesting about Eliza, and I think it lines up well with what consumers are saying about ChatGPT.
Previous experiments with LLMs (not the one I discussed in this paper) often saw a noticeably lessened, or even a total loss of, effect if people knew it was an AI. I wonder why this matters in some cases but not in others.
People generally prefer AI/robots for any discussion of emotions, memories, and things that could be shameful or could affect their social standing with others. This is because they know and trust that their conversation WON'T generate a human reply, much like a therapist, a confessional, or perhaps the confidence of a lawyer.
People prefer human connection if they can see power/negotiation/potential accruing to them through the interaction.
Unfortunately, reports and papers on AI often show very little background in the actual study of human-robot and human-computer interaction.
Makes sense