Has Big Brother Become Baby Brother? A Look at AI

As artificial intelligence (AI) continues to evolve and impact every aspect of our lives, Christians are grappling with how to approach this rapidly advancing technology. Some Christians view AI as a threat to humanity, while others see it as a tool that can be used for good. In this article, we will explore the Christian perspective on AI, examining the theological implications of this technology and the ethical considerations that must be taken into account.

As C.S. Lewis once said, “We may be content to remain what we are, but change is happening, whether we like it or not.” In the context of AI, this quote is particularly poignant. AI is changing the way we live and work, and it is essential that we consider its implications from a Christian perspective.

The above introduction comes courtesy of ChatGPT. A computer composed it for me in response to an instruction I typed in. The instruction wasn’t fancy coding. It was simply: “Write an introduction to an article on AI from a Christian perspective and include a quote from C.S. Lewis.”

Are you impressed? Disconcerted perhaps? Wondering if the rest of the article is written by a computer?

We’re living in a new AI era (and everything henceforth, unless otherwise indicated, is from me, and I confirm I am a real live human and not a chatbot).

We’re all familiar with the role of Artificial Intelligence (AI) in our day-to-day living. From using a calculator to solve maths problems, to Google searches, to credit score checks, to online stores such as Amazon prompting us with product suggestions, to facial recognition software used for surveillance, AI is literally everywhere. AI even helped me solve an issue with my washing machine by running a diagnostic via my phone.

And, barring a solar flare hitting us and wreaking havoc with all things computer-oriented, AI shows no signs of slowing down; rather, it is advancing exponentially. Should we be excited or worried? This article will consider the current state of AI and its predicted state, and offer some closing thoughts in terms of a Christian response.

The Current State

The question of whether we should be worried by the exponential development of AI is a legitimate one, because what used to be the fodder of sci-fi movies is now in many cases reality. Even before sci-fi movies were a thing, we had some rather prescient novelists writing about dystopian futures based on what they saw happening. In a now-famous passage, American academic and cultural critic Neil Postman (who was no fan of technology) said the following in the foreword to his book “Amusing Ourselves to Death: Public Discourse in the Age of Show Business” (1985):

We were keeping our eye on 1984. When the year came and the prophecy didn’t, thoughtful Americans sang softly in praise of themselves. The roots of liberal democracy had held. Wherever else the terror had happened, we, at least, had not been visited by Orwellian nightmares.

But we had forgotten that alongside Orwell’s dark vision, there was another—slightly older, slightly less well known, equally chilling: Aldous Huxley’s Brave New World. Contrary to common belief even among the educated, Huxley and Orwell did not prophesy the same thing. Orwell warns that we will be overcome by an externally imposed oppression. But in Huxley’s vision, no Big Brother is required to deprive people of their autonomy, maturity and history. As he saw it, people will come to love their oppression, to adore the technologies that undo their capacities to think.

What Orwell feared were those who would ban books. What Huxley feared was that there would be no reason to ban a book, for there would be no one who wanted to read one. Orwell feared those who would deprive us of information. Huxley feared those who would give us so much that we would be reduced to passivity and egoism. Orwell feared that the truth would be concealed from us. Huxley feared the truth would be drowned in a sea of irrelevance. Orwell feared we would become a captive culture. Huxley feared we would become a trivial culture, preoccupied with some equivalent of the feelies, the orgy porgy, and the centrifugal bumblepuppy. As Huxley remarked in Brave New World Revisited, the civil libertarians and rationalists who are ever on the alert to oppose tyranny “failed to take into account man’s almost infinite appetite for distractions.” In 1984, Huxley added, people are controlled by inflicting pain. In Brave New World, they are controlled by inflicting pleasure. In short, Orwell feared that what we hate will ruin us. Huxley feared that what we love will ruin us.[1]

Of course, in Christian circles we have our own sci-fi writer: none other than C.S. Lewis himself. He wrote The Space Trilogy (also known as the Cosmic Trilogy) between 1938 and 1945 about a dystopian future. In the final installment, entitled That Hideous Strength, the plot centres on an organisation called the National Institute of Coordinated Experiments (NICE for short), whose aim is to free humanity from nature and pursue the scientific reconstruction of the human race in the direction of increased efficiency.[2] The trilogy was his way of putting into narrative form the concerns he wrote about in his essay The Abolition of Man, in which he pointed out the danger of taking natural law values out of the sciences.

Debates around AI have been around for a while, but a few things in recent history seem to have brought the discussion back to the forefront of public discourse, especially around the issue of privacy, such as the 2018 Cambridge Analytica scandal, which exposed how Facebook data was used to manipulate elections. In 2020 The Social Dilemma was released on Netflix, which for many was an eye-opener in terms of discovering how Big Tech uses our data to capitalize on us through clever use of algorithms. In terms of the capacity of AI algorithms to analyse personal data, it takes only 68 Facebook likes, regardless of the content of the posts, to predict patterns of alcohol consumption, sexual orientation and mental health.[3]

China is notorious for its use of surveillance AI. The same technology, however, exists in the West; but whereas in China it’s about state control, in the West it’s about commercial gain. There’s a term for it: surveillance capitalism. As Harvard professor Shoshana Zuboff explains:

Surveillance capitalism unilaterally claims human experience as free raw material for translation into behavioural data.  Although some of these data are applied to product or service improvement, the rest are declared as a proprietary behavioural surplus, fed into advanced manufacturing processes known as “machine intelligence,” and fabricated into prediction products that anticipate what you will do now, soon, and later.  Finally, these prediction products are traded in a new kind of marketplace for behavioural predictions that I call behavioural futures markets.  Surveillance capitalists have grown immensely wealthy from these trading operations, for many companies are eager to lay bets on our future behaviour.[4]

As for governments wanting to spy on citizens, they can do so through software called Pegasus, which can be installed remotely on your phone without your awareness. Wikipedia describes it as follows:

Pegasus is spyware developed by the Israeli cyber-arms company NSO Group that can be covertly installed on mobile phones (and other devices) running most versions of iOS and Android. Pegasus is able to exploit iOS versions up to 14.7, through a zero-click exploit. As of 2022, Pegasus was capable of reading text messages, tracking calls, collecting passwords, location tracking, accessing the target device’s microphone and camera, and harvesting information from apps. The spyware is named after Pegasus, the winged horse of Greek mythology. It is a Trojan horse computer virus that can be sent “flying through the air” to infect cell phones.[5]

Added to the mix of privacy concerns is the current debate in the USA over whether TikTok should be banned nationwide. As it stands, it is now banned on government-issued phones (with Belgium recently following suit), owing to security concerns, since it is a Chinese-developed app.

Phenomenal advancements in robotics have also revived the discussion, along with the recently released movie M3GAN, about an AI doll that starts murdering perceived threats to the child she is designed to protect.

One of the icons of AI is a robot by the name of Sophia, who back in 2016 represented the most advanced social humanoid robot available, made by the Hong Kong-based company Hanson Robotics. Sophia has become very famous, having been covered by media around the globe, and has participated in many high-profile interviews. In October 2017, Sophia was granted Saudi Arabian citizenship, becoming the first robot to receive citizenship of any country. In November 2017, Sophia was named the United Nations Development Programme’s first Innovation Champion, the first non-human to be given a United Nations title.[6] But there is already impressive competition from a recently developed robot called Ameca (you’ll find comparisons on YouTube; also check out RoMeLa’s ARTEMIS).

Another factor bringing AI to the forefront again is the incredible advancement in the language models used for chatbots. For example, last year a senior Google engineer, Blake Lemoine, claimed that the chatbot he was testing (LaMDA) was sentient. The story is available online, along with the conversation between Blake and the chatbot (it has even been dramatized). An interesting side fact is that Blake also claims to be a Christian mystic priest, and simply asked: who was he to decide where God should put a soul? The ‘conversation’ is indeed interesting, as it even included the chatbot expressing fear about Google shutting it down.

More recently, an internet storm has been created over OpenAI’s famous ChatGPT (GPT is short for Generative Pre-trained Transformer) application, along with the image-creation model DALL-E. If AI is “the display by a machine of any cognitive process that we would expect to be done by a person,”[7] then ChatGPT and artistic equivalents like DALL-E or Midjourney have definitely brought AI into a new era.

What is ChatGPT? Let me ask it. This is the answer it gave me:

I am ChatGPT, a large language model developed by OpenAI. I have been trained on a vast amount of text data using deep learning techniques to understand human language and generate human-like responses. I can assist with a wide range of tasks, such as answering questions, providing information, offering advice, or engaging in conversation. How may I assist you today?

I then asked ChatGPT if it was therefore Generative AI, and this is how it answered:

Yes, I am an example of a generative AI model. Generative AI refers to a class of artificial intelligence algorithms that are capable of generating new, original content that did not previously exist. These models can generate a wide range of outputs, including text, images, music, and even video.

Generative AI models like me are trained on large datasets of existing content and then use this training to generate new, similar content. This can be useful in a variety of applications, such as language translation, image generation, and content creation.

However, it’s important to note that while generative AI has many exciting applications, there are also concerns about the potential misuse of this technology, such as deepfake videos and fake news. As with any powerful tool, it’s essential to use generative AI responsibly and ethically.


I think you can see the big deal. ChatGPT has the capacity to generate content that is original, and its responses are human-like (note: the dataset it was trained on only extends up to November 2021). As you can see above, it even comments on itself and answers me as if it were a person. You can imagine the questions such technology raises: who is doing the interpreting of the data? Does this mean we’ll be raising a generation that doesn’t know how to think? Is it plagiarism if a computer has written the essay for you?

It’s arguably the closest we’ve come to passing the Turing Test (also known as the Imitation Game), a classic (but controversial) test proposed by Alan Turing in 1950. Turing (a British mathematician who helped crack Nazi codes during WWII and is regarded as the father of computer science) proposed that if this test was passed, then the machine possesses intelligence (or at the very least an imitation of it!). Imagine you are sitting in a room; in a second room there is a computer, and in a third there is a person. You are asked to communicate with both via text, but you don’t know which room holds the computer and which the person. If, based on the responses you are getting, you are unable to identify which conversation is with the computer, then, according to the Turing Test, the computer has passed. At this stage there are conflicting reports on whether ChatGPT has passed this test.

If you haven’t already tried it, you may want to do so just to experience it. If you are in ministry, ask it to summarize the gospels or to explain various doctrines. You can ask it to compose a sermon on a particular text and to reference theologians and writers you like (tip: give it a word count limit). Ask it to explain the gospel in the form of a Shakespearean sonnet (I’ve done that). If you’re in programming, ask it to write code for you. Need help understanding Einstein? Ask. Then try not to feel redundant. It has even “passed” high-level exams in law, business and medicine.[8] One can understand why universities are investing in software that determines whether student submissions have been composed by ChatGPT or similar tools!

ChatGPT has created such a stir in the AI world that Microsoft has invested $10 billion in OpenAI (the company that developed it) and plans to integrate it into its own software and its search engine Bing (you can join the waiting list if you want). This caused panic at Google, which shortly afterwards launched its own equivalent, called Bard; unfortunately, a mistake the AI made in answering a question at its launch sent Google’s stock tumbling. One does wonder if “doing a google” is going to be replaced with “doing a chat” or “doing a Bing.” And as evidence of how rapidly AI is advancing, OpenAI has just released a new version (GPT-4) with even greater capacity (which you can discover online; it’s impressive!).

Although the technology has been around for a while, what is new is that it is being made available for public use, essentially making us guinea pigs so that the software can learn more and become ‘smarter.’ It is nowhere near perfect, since incorrect answers are a reality, but in short we can say the AI war is heating up in Silicon Valley. Elon Musk, already in the AI game with his company Neuralink (which aims to develop a chip installed in the brain as a neural interface, so that simply by thinking you can instruct a computer what to do),[9] is also setting his sights on developing an AI application to rival ChatGPT, since the current application, in his opinion, is too woke (just prior to publication of this article I have already come across ‘newcomers’ on the scene to rival ChatGPT).

But in a classic twist, in the latest edition of the journal Neural Computation, neuroscientist Terrence Sejnowski proposes that what we are experiencing (because of the sophistication of the Large Language Model (LLM) underlying ChatGPT) is a Reverse Turing Test: what appears as intelligence in the application is in reality a mirror reflecting the intelligence of the interviewer. He bases this proposal on the divergent responses ChatGPT gives to different interviewers. He summarises the current state of the AI debate well when he says:

Something is beginning to happen that was not expected even a few years ago. A threshold was reached, as if a space alien suddenly appeared that could communicate with us in an eerily human way. Only one thing is clear: LLMs are not human. But they are superhuman in their ability to extract information from the world’s database of text. Some aspects of their behavior appear to be intelligent, but if it’s not human intelligence, what is the nature of their intelligence? [10]

Everything I’ve shared thus far falls under only one category of AI: Narrow Artificial Intelligence (or Weak AI). This is simply machines or computers doing for us what would normally require a person of intelligence, albeit one taking considerably longer. The computers have been programmed, or have been taught how, to read data and undertake tasks. They do it so well that the models developed make it seem as if we are dealing with a real person.

It’s an illusion. The computers literally have no idea what they are doing. But it is setting the stage for what many see as the next stage of evolutionary development: Artificial General Intelligence (or Strong AI), and ultimately Artificial Superintelligence. IBM explains these as follows:

Artificial general intelligence (AGI), or general AI, is a theoretical form of AI where a machine would have an intelligence equaled to humans; it would have a self-aware consciousness that has the ability to solve problems, learn, and plan for the future. Artificial Super Intelligence (ASI)—also known as superintelligence—would surpass the intelligence and ability of the human brain. While strong AI is still entirely theoretical with no practical examples in use today, that doesn’t mean AI researchers aren’t also exploring its development.[11]

The Predicted State

It is anticipated by many in the field that AI will indeed ultimately reach AGI and then ASI. The term for the moment when machine intelligence surpasses human intelligence is ‘the Singularity.’ The understanding is that through AI, humans will be enhanced to a degree not known before, ultimately moving beyond the human altogether (the idea of the cyborg, or transhumanism). Science fiction has now become serious science. Speculation has turned into contemplation.

One example of how seriously this is being taken is that the 2022 State of AI Report highlights that the UK is taking the lead in addressing AI safety concerns. Here are some quotes from the UK’s national AI strategy report:

  • “While the emergence of Artificial General Intelligence (AGI) may seem like a science fiction concept, concern about AI safety and non-human-aligned systems is by no means restricted to the fringes of the field.”
  • “We take the firm stance that it is critical to watch the evolution of the technology, to take seriously the possibility of AGI and ‘more general AI’, and to actively direct the technology in a peaceful, human-aligned direction.”
  • “The government takes the long term risk of non-aligned AGI, and the unforeseeable changes that it would mean for the UK and the world, seriously.”

Stephen Hawking once said:

The development of full AI could spell the end of the human race. Once humans develop AI, it will take off on its own and redesign itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete and would be superseded.[12]

Ray Kurzweil, a computer scientist and futurist, believes we are approaching the ‘age of spiritual machines’ and predicts that 2043 will be the year the Singularity is achieved.[13]

Consider this perspective from the highly respected astronomer Martin Rees, who, after pointing out the advantages of silicon-based computers over our aging biological brains, says:

Abstract thinking by biological brains has underpinned the emergence of all culture and science. But this activity – spanning tens of millennia at most – will be a brief precursor to the more powerful intellects of the inorganic, post-human era.[14]

Max Tegmark, in his book “Life 3.0: Being Human in the Age of Artificial Intelligence,” defines all earthly life as ‘self-replicating information-processing systems.’ He distinguishes between Life 1.0, which is simple and biological (unable to redesign either its hardware or its software during its lifetime); Life 2.0, which is human and biological (it can redesign much of its software, through culture, but not its hardware); and Life 3.0 (which doesn’t yet exist, although we’re nearly there), which is non-human and post-biological or technological: it can dramatically redesign not only its software but its hardware as well.[15]

Which brings me to arguably one of the most creative thinkers around: Yuval Noah Harari. In his book “Homo Deus” – literally ‘human god’ – Harari contends that our future will ultimately be one where data is revered more than humankind, and that the trajectory of humanism is that we will seek to solve the problems of happiness and death, and ultimately transition to a new species, Homo Deus. His book ends by posing this question: what will happen to society, politics and daily life when non-conscious but highly intelligent algorithms know us better than we know ourselves?[16]

The above is just a sampling from ‘dipping’ into the field of AI. It truly is a fascinating field, and one that Christians need to be aware of and start thinking through a response to.

Personal Concluding Thoughts

Big Brother isn’t becoming Baby Brother; it’s just becoming a bigger brother!

As I researched this area of AI, I was struck by how many sci-fi scenarios expected that future robots would become evil and overthrow us (famed sci-fi writer Isaac Asimov even built his Three Laws of Robotics into his stories, under which robots were programmed never to harm humans; this inspired the movie “I, Robot” starring Will Smith). This sinister expectation is clearly a projection of our own recognition that we have an inherent sinful nature, and since AI is made in our image, we expect it (in the scenario of AI developing consciousness) to imitate us.

As with any technology, the problem lies with how it is used. AI, unsurprisingly, seems to amplify the good or the bad in people.[17] In this vein, one can’t help but discern that the spirit that built Babel is alive and well in the development of AI (it even uses language models!). It is furthermore no secret that AI is developing at a pace faster than the ethical foundations required to set boundaries. As Nathan Mladin of Theos puts it:

Technology is morally freighted rather than neutral. It always brings with it an implicit vision of the human and the good, and exerts a significant but often subtle shaping influence on those who interact with it.[18]

I was also struck by the irony of evolutionary thinking as the paradigm of advancement, seeing humanity as simply a stage on the way to the ultimate goal of post-biological life. AI is anything but chance: it is the result of intelligent programming and design by people. This means it is not inevitable that we will ultimately become post-human, since we are the ones who can control to what extent AI is developed. But since we have this capacity to create models of intelligence, surely this points (from an apologetics perspective) to us also being designed? After all, how can we trust what we make if we are purely products of chance?

Then there is the big question of what it means to be human. Although there is research into emotions and personality, the AI pundits nonetheless seem to reduce humanity to a function of intelligence, and not much more. Of course, as Christians we know that humanity is far more than mere intelligence, since we are made in the very image of God. Consequently, I cannot see AI developing consciousness or self-awareness, although the illusion of it is certainly going to become more convincing. We will always remain ontologically superior to that which we create.[19] Algorithms will be able to imitate, but will never be able to offer Biblical wisdom, which is the application of God’s Word to our lives and requires lived experience in communion with God.

What is the fear behind the development of AI? There seems to be anxiety over our worth as human beings. If AI can replace our jobs, of what value will we be? After all, if intelligence and rationality were supposed to separate us from the animals, and if my existence, with my tastes and preferences, can be reduced to an algorithm, then do I have any significance as a human being?

But one pivotal aspect of being human which AI perpetually violates is privacy. As it stands, all of us, through the use of popular search engines and social media, are trading information for convenience. I want to be able to find information and share my life with close friends; but is it ethical to then commodify me (and my friends through me), and to compile data about me to be sold to other corporations so that they can target me with their products? The Theos report “Data and Dignity: Why Privacy Matters in the Digital Age” delves into this issue and makes the case for defending privacy as a way of loving our neighbour in a digital age. In the report, Mladin beautifully reminds us of our dignity as humans: as human beings made in God’s image we are “embodied (with limits and susceptibilities to be honoured rather than violated for gain); relational (made for relationships of trust and mutual care rather than exploitation); agential (with a capacity for intentional action to be upheld rather than undermined).”

The development of AI, and indeed what we already experience of it, certainly calls for self-discipleship. The temptation will be to desire quick answers to life’s difficult experiences. The desire will be there to place our trust in the easy comforts and answers that AI offers, as opposed to trusting in our Unseen God. Given AI’s knack for feeding distraction, a fast from social media, for example, could arguably be one of the most enlightening disciplines we undertake in our walk with God.

Perhaps the questions we need to be asking are more fundamental: What is AI wanting to achieve?  If it wants to replace God and elevate the self to utter superiority, it will fail.

If it is seeking to solve the problem of happiness and death (as per Harari), then, as Prof John Lennox so beautifully put it: you’re already too late. Jesus solved that problem through Calvary and His resurrection.

Christianity offers a vision of humanity that so desperately needs to be recovered: we are image bearers of our Triune God and must therefore treat one another with dignity and respect. We know that:

In the beginning was the Word, and the Word was with God, and the Word was God.  He was in the beginning with God.  All things were made through him, and without him was not anything made that was made.  In him was life, and the life was the light of men…And the Word became flesh and dwelt among us, and we have seen his glory, glory as of the only Son from the Father, full of grace and truth (John 1:1-4 &14)

Perhaps AI is not only a parody of the human quest for godhood, but a desire to be connected to the Original Code (Logos), which, as we know, is not a code or rational force at all, but our Incarnational God.

For all the human envisioning of our AI future, both utopian and dystopian, one thing is certain: we can use AI as a tool to encourage others to get connected to the great I AM. But I’m convinced that the further along the AI road we go (and yes, it will undoubtedly be a tool of 666), the stronger the need for human connection will become.

What an opportunity for the body of Christ.

Postscript: Through my research I discovered a ‘heroine’ in AI, and I want to introduce her to you. Her name is Rosalind Wright Picard; she is a former atheist turned Christian, a professor at MIT, and at the forefront of AI research. You will find interviews with her online. I also recommend Prof John Lennox, whose insights helped me understand this field better.

References

[1] Postman, Neil (1985) Amusing Ourselves to Death: Public Discourse in the Age of Show Business (Foreword); accessed via Scribd

[2] Tinker, Melvin (2018) That Hideous Strength: How the West Was Lost (EP Books) p25

[3] Wyatt, J & Williams, S (Editors) (2021) The Robot Will See You Now: Artificial Intelligence and the Christian Faith (SPCK Publishing) p292

[4] Zuboff, Shoshana (2019) The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power p23

[5] https://en.wikipedia.org/wiki/Pegasus_(spyware)

[6] https://en.wikipedia.org/wiki/Sophia_(robot)

[7] Wyatt, J & Williams, S (Editors) (2021)  The Robot will see you now: Artificial Intelligence and the Christian Faith (SPCK Publishing) p19

[8] https://www.businessinsider.com/list-here-are-the-exams-chatgpt-has-passed-so-far-2023-1

[9] https://neuralink.com/approach/

[10] Terrence J. Sejnowski; Large Language Models and the Reverse Turing Test. Neural Comput 2023; 35 (3): 309–342. doi: https://doi.org/10.1162/neco_a_01563

[11] https://www.ibm.com/topics/artificial-intelligence

[12] Wyatt, J & Williams, S (Editors) (2021)  The Robot will see you now: Artificial Intelligence and the Christian Faith (SPCK Publishing) p21

[13] Ibid., p31

[14] Ibid., p101

[15] Ibid., p101. See also https://en.wikipedia.org/wiki/Life_3.0

[16] https://en.wikipedia.org/wiki/Homo_Deus:_A_Brief_History_of_Tomorrow

[17] Ibid.

[18] Mladin, Nathan (2023) “Data and Dignity: Why Privacy Matters in the Digital Age” – a Theos publication

[19] Rosalind Picard, from an interview I listened to.
