Generative Artificial Intelligence: Part of the Divine Tapestry or a Travesty?

By Callie Long, Communications Office
The Oxford English Dictionary defines artificial intelligence (AI) as “the theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.” Wikipedia describes it as “the intelligence of machines or software, as opposed to the intelligence of humans or animals.”
Artificial intelligence is not new and has been quietly employed in the background for quite some time. What is new, and exciting or frightening depending on where you stand, is what we think of as generative AI in the form of, for instance, ChatGPT-3, a large language model available to anyone with access to the Internet (and its more robust, subscription-based companion, GPT-4). There are also other forms of this AI, some better than others.
I thought it would be interesting to ask ChatGPT-3 to define itself: what it is, what it represents and what its limitations are. This is how the program answered:
I am ChatGPT, a [large] language model created by OpenAI. I am a computer program that uses artificial intelligence (AI) and natural language processing to generate text-based responses to questions and prompts. I represent a powerful tool for natural language understanding and generation, and I can perform a variety of tasks, including answering questions, generating text, providing explanations, and engaging in conversations on a wide range of topics.

Regarding its self-analysis of inherent limitations, the program offers that it lacks personal consciousness: “I am not a sentient being. I do not possess thoughts, emotions, or awareness. I am just a machine processing text based on patterns and information from the training data.”
For this article, I wanted to focus on generative AI from a faith-based perspective. As in all faiths, ethical and moral considerations play a role when making sense of this new trend in technology. We can, for instance, say that from a Presbyterian perspective, we promote the well-being of all living beings. We, therefore, must ask what impact this new advance has on society, given that the ethical, philosophical and moral implications of AI from a faith-based perspective vary.
From a Presbyterian perspective, the implications of AI can be framed by considerations such as human dignity, responsibility and stewardship, justice and fairness, accountability, discernment, and ethical frameworks, to name a few. While this is a broad-based view, we also know that individual perspectives come in all shapes and sizes, meaning that opinions will differ, even within our denomination. Be that as it may, when it comes to responsibility and stewardship, for instance, as Presbyterians, we often emphasize the concept of the responsible management of God’s creation, which means that we should strive for ethical AI development that involves stewarding technology in a way that benefits humanity and protects the environment.
The Presbyterian Church in Canada does not have a policy addressing the recent advances in generative artificial intelligence, but our doctrine promotes principles of justice and equity, so from an ethical and moral perspective, AI should not perpetuate or exacerbate social, economic or racial inequalities. This means that developers and users of AI have a moral obligation to ensure that these systems are used to promote justice and fairness in society.
Here’s the rub. It is a well-known and acknowledged fact that the current iteration of generative AI is rife with bias. However, addressing bias in these large language model systems is complex, given that generative AI models, such as GPT-3, are trained on massive datasets sourced from the Internet, which often reflect the biases present in the text available online. This translates into the AI models learning and perpetuating discriminatory and biased information. More troubling is that these models do not create content from scratch. They synthesize and generate text based on patterns, meaning any biases in the training datasets will be amplified, given that the models are designed to predict what comes next based on what they’ve seen before. Even when training data is carefully curated to remove explicit biases, there is always the danger of these models exhibiting implicit bias, which can be so subtle that it’s challenging to detect and mitigate. Generative AI models do not understand context or exercise critical thinking. Instead, they generate responses based on statistical associations, which can and do reinforce stereotypes or prejudices regardless of the social and individual implications.
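For readers who want to see what “predicting what comes next” looks like in practice, here is a deliberately tiny sketch in Python. It is only an illustration: the three training sentences and the word-pair counting are invented for this article and bear no resemblance to the scale or sophistication of a model like GPT-3, but they show how a purely statistical predictor repeats whatever associations its training text happens to contain.

from collections import Counter, defaultdict

# Toy "predict what comes next" model: given the two preceding words,
# output whichever word followed them most often in the training text.
# The training sentences below are invented solely for illustration.
training_text = (
    "the nurse said she was tired . "
    "the nurse said she was busy . "
    "the engineer said he was late ."
)

counts = defaultdict(Counter)
words = training_text.split()
for a, b, c in zip(words, words[1:], words[2:]):
    counts[(a, b)][c] += 1

def predict_next(a, b):
    # Return the most frequent continuation seen after the pair (a, b).
    return counts[(a, b)].most_common(1)[0][0]

# The training text only ever pairs "nurse" with "she" and "engineer"
# with "he," so the predictor repeats that association every time:
print(predict_next("nurse", "said"))     # -> she
print(predict_next("engineer", "said"))  # -> he

A real model trained on billions of sentences works with vastly richer statistics, but the underlying move is the same: it repeats the patterns it has seen, which is why skewed training data produces skewed output.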
On a personal note, I find it fascinating that GPT-3 refers to itself as “I,” which does imply a sense of personhood or self, a topic that requires an article all to itself! But I wanted to mention it here, mainly because there have been insightful writings and musings about our human tendency to anthropomorphize these large language models, and some people claim that these models are sentient. A point-of-view article by Nir Eisikovits posted to the online magazine Fast Company notes that there is a natural “ease with which people project human features onto our technologies, rather than the machines’ actual personhood.” The writer argues that “popular culture has primed” us to imagine “dystopias in which artificial intelligence discards the shackles of human control and takes on a life of its own” (fastcompany.com/90867578/chatbots-arent-becoming-sentient-yet-we-continue-to-anthropomorphize-ai).
For a particular generation, all you must do is invoke the name “HAL” for these flights of fantasy to run riot in one’s brain. Eisikovits, however, writes that “these worries are—at least as far as large language models are concerned—groundless” and that these machines are no more than “sophisticated sentence completion applications” with the uncanny ability to mimic human communication. However, Eisikovits warns against the “tendency to view machines as people and become attached to them, combined with machines being developed with humanlike features [that] point to real risks of psychological entanglement with technology.” For the writer, this calls for “strong guardrails to make sure that the technologies don’t become politically and psychologically disastrous,” given the real danger of people being “politically and emotionally manipulated” by such machines.
In a world of constant advances in technology, from a Reformed faith tradition, we are called to engage with this evolving landscape thoughtfully and reflectively, seeking to understand the implications of AI from the perspective of what we believe. We believe God is the creator of all things, weaving our world’s intricate tapestry. From this perspective, we can then ask whether AI is yet another thread of the tapestry that is our existence. If we are made in God’s image, is AI simply another product of human ingenuity, inspired by the divine spark of creativity that threads through us? Can we marvel at how human creativity mirrors the sacred, or should we be more vigilant in exercising responsible stewardship over these newfound capabilities? It is a bit of both, bearing in mind that we are always called to justice and equity and to an ethical engagement with the world in which we live.
To the question of whether the data used to train GPT-3 was ethically sourced, this is the answer I received:
I don’t have direct knowledge of the specifics of the training data used for my development. However, I was trained on a mixture of licensed data, data created by human trainers, and publicly available data. OpenAI, the organization behind Assistant, has made efforts to ensure that the data used is ethically sourced and in compliance with relevant regulations and guidelines.
OpenAI is committed to ethical AI development and has established guidelines to address issues such as bias and fairness in AI systems. They aim to minimize potential biases and to continually improve the fairness and transparency of AI models like mine. If you have concerns about the ethics of data sourcing, I recommend reaching out to OpenAI directly for more detailed information on their data collection and model development processes.
I responded (instinctively, as though I would to a human being): “No, this is fine. I am researching an article on generative AI and the ethics that guide it, so this is helpful to include. Thank you!”
In reply, GPT-3 wrote the following:
You’re welcome! I’m glad I could provide some information that can be helpful for your article. If you have any more questions or need further information related to generative AI and its ethical considerations, feel free to ask. Good luck with your research and article!

*OpenAI. (2023). ChatGPT (September 25 Version) [Large language model].
All that was missing was a smiley face emoji!
More information: The Canadian Council of Churches began a new series on artificial intelligence in May of this year with a fascinating webinar, “Is Intelligence Enough?”, available at bit.ly/CCC-AI-Webinar or on the CCC’s YouTube channel at bit.ly/ccc-webinar-ai.
Also, read more about the societal risks of artificial intelligence in the report “Enabling a Rights-based AI?” posted to the World Association for Christian Communication’s website at waccglobal.org/enabling-a-rights-based-ai. The report documents the continued need for governments, corporations and civil society to work together on a rights-based approach that first and foremost protects freedom of expression and access to information, and it argues that regulation should be grounded in human rights, transparency and independent oversight.