Sekoutoure Abodunrin
June 10, 2025 at 12:04 PM
(Not an easy read. However, think about it.) James speaks of the wisdom from above and gives its characteristics: pure, then peaceable, gentle, easy to be intreated, full of mercy and good fruits, without partiality, and without hypocrisy (see James 3:17). Today there is “wisdom” from AI, and either the AI itself or our use of it reminds us more of James 3:14-16. We must subject relational “wisdom” acquired from AI to the James 3:17 rule; otherwise we might find ourselves in James 3:14-16.

Increasingly, saints are treating such AI wisdom as though it were authoritative, like God’s word. It is becoming a crutch. It is being allowed to truncate human interaction. Folks interact with each other, then give up on each other and instead turn to AI to label the other person, or even to explain the other person’s intentions! We should always make sure we are certain about what another human means or intends, rather than going off to AI. Although AI is a great enhancer and disruptor that has come to stay, we must remember that there are inappropriate uses of AI.

Some time ago, in a group meeting in our church in England, the saints bombarded me with questions about AI in church music, AI in Bible study, and so on. There is also the troubling challenge of saints increasingly using AI to label each other by subjecting each other’s words to AI analysis. The questions and use cases set me thinking. When it comes to AI, there is a potential for bias: AI algorithms often reflect biases present in the data they are trained on, which can have negative consequences for individuals when AI judgment is relied upon. At its current level of implementation, AI lacks emotional and psychological sensitivity. While AI can be a valuable tool, it is crucial to consider its limitations and its potential impact on human connection and relationships.
We must ensure that AI doesn’t (un)consciously replace each saint’s essential responsibility to develop discernment in human interaction and empathy in our relationships. Wherever you find yourself using AI in a relational context, always ensure an experienced person conducts the final review rather than blindly trusting AI. It is safest never to let AI make the final call in a relational context. Why? AI struggles to unpack complex emotional feelings, and humans are complex. I usually advise our Gen Z, who are quicker to embrace newer technology, against using AI to arrive at relationship decisions without human judgment.

Take the case of a wife using AI to "prove she’s right" or to "prove her husband is wrong or toxic” in a verbal exchange, based on a voice note, a chat, and so on. A wife aiming to submit a voice note from her husband to AI analysis is on slippery ground. Ethically, both parties in a conversation need to agree to AI analysis, so such a wife should ask her husband for consent. And if she then tells her husband that he was "wrong" or "toxic" based on AI judgment, such a move can cause more harm than good. You see, AI routinely misinterprets jokes, irony, sarcasm, and inside jokes, and so is prone to labelling a light argument a serious fight. If one human can’t be sure of rightly interpreting the mind of another human they have lived with for years, can AI?

… to be continued
