Chris Garrod, Head of Fintech, Conyers Dill & Pearman Limited

Chris Garrod is a Director in the Corporate Department of Conyers Dill & Pearman Limited and is the Head of its Fintech group. He is also a member of its insurance practice in Bermuda. Chris advises on all matters involving Fintech, including Insurtech. He has assisted clients in forming crypto vehicles using blockchain-based technology, setting up Bermuda’s first digital asset issuer. He serves as a member of the Bermuda Business Development Agency’s Fintech Legal and Regulatory Sub-committee and is a director of the Association of Bermuda International Companies. He is also a director of the Bermuda Foundation of Insurance Studies.

Generative AI

“Generative AI is a type of artificial intelligence capable of creating new content, patterns, or data by learning from existing examples. This is in contrast to discriminative AI, which focuses on identifying and classifying patterns within given data. Generative AI models often rely on machine learning techniques, such as deep learning and neural networks, to generate content similar to or based on the input data they have been trained on.”

It can also be a bit boring, and worse: that last paragraph was produced by me asking OpenAI’s latest model, GPT-4, via ChatGPT (which is wholly language-based) the question: “What is Generative AI?” (GPT stands for “Generative Pre-trained Transformer.”)

ChatGPT is a free AI chatbot that can generate an answer to almost any question it is asked. Developed by OpenAI (co-founded by Elon Musk and Sam Altman) and backed by some $10bn of investment from Microsoft, it was released for testing to the general public in November 2022, and over a million people signed up to use it in just five days. By January 2023, ChatGPT had crossed 100 million users, making it the fastest-growing consumer application to date.

Three primary AI chatbots are now competing: OpenAI’s ChatGPT, Google’s Bard, and Microsoft’s Bing. I will not review any of these, but I will say I prefer ChatGPT (GPT-4 mainly), because its human-like writing outperforms the others (it is also great for research). I can barely look at my Twitter or LinkedIn feeds without seeing an article about generative AI and AI chatbots (OK, this is not helping).

It is highly advanced AI, but it is AI trained on the internet and limited to the content found there. Therefore, it has its dangers and limitations.

Dangers?

My biggest concern is that generative AI applications such as ChatGPT are based on data provided by humans, which means they are only as good as the data they are fed and are limited to the data and algorithms they rely upon.

There is no context. There are no nuances.

So therein lies the danger. AI generally develops biases and prejudices if its data is biased: what you put in is what you get out. Humans are prone to biases; whatever they do is shaped by their beliefs, viewpoints… and biases. Unfortunately, ChatGPT can absorb and harbor that biased and prejudiced information, and the problem will only grow.

When asked to write software code to check if someone would be a good scientist, ChatGPT defined a good scientist as “white” and “male”.

As another sad example: ChatGPT told users it would be okay to torture people from specific minority backgrounds.  Although – to their credit – OpenAI has been trying to combat this kind of toxicity, it is nowhere near perfect.

Further, empathy and emotions are also crucial when making decisions in our daily lives, which is something that ChatGPT (and AI generally) cannot achieve. If I email my doctor, I expect an empathetic response, not one driven by machine learning. Medicine is a profession where understanding and connecting with patients personally is critical to achieving positive outcomes.

Emotions contribute to our ability to form and maintain relationships, and while AI can try, it cannot replace the genuine human connection that is essential to our everyday lives.

Limitations?

Ahh… innovation and creativity. ChatGPT scours the internet to generate a response to what you have asked it, based on existing information – but is it genuinely able to create original ideas or think outside the box? I cannot ask ChatGPT to “Review the Beatles 2022 Super Deluxe Edition in 1,000 words.” (Well, I tried it once, but it was a disaster, with “Yellow Submarine” ending the album!)

Human creativity allows us to produce novel ideas, inventions, and art, pushing the boundaries of what is possible. While ChatGPT can generate creative outputs, it ultimately relies on the data it has found, which limits its ability to create truly original and groundbreaking ideas. 

Further, we all fundamentally learn from our experiences and mistakes. We are all pretty adaptable, able to learn from what we have done, and we adjust our behavior based on what we have learned. While ChatGPT can provide information drawn from the extensive dataset it was trained on, it cannot replicate the human ability to learn and adapt from personal experience. AI is heavily dependent on the data it receives, and any gaps in that data will limit its potential for growth and understanding.

Finally, oh God, is ChatGPT going to ruin my 14-year-old’s school life? Or will it make it better…?

Many schools, colleges, and universities face a dilemma over students using ChatGPT to complete academic work. How can they ban it? Should they ban it? Can students be taught to use it in a useful way? It is a bit scary, but it is oh so easy to go to GPT-4, input “Write a 1,000-word essay on the assassination of President Lincoln,” and wait for the results (or keep regenerating them until you’re happy with one you can copy and paste!)

Unfortunately, as this technology is still so new, the honest answer is “We don’t know”; it is too early to tell. Nevertheless, time will tell, because AI chatbots built on generative AI models such as ChatGPT are here to stay.

We are also here to stay.

Tools like ChatGPT will, of course, have their place, but instead of thinking of generative AI and AI chatbot models as some general threat to humanity, I like to think of AI – in this specific case – as hopefully standing for “augmented intelligence” rather than “artificial intelligence.” With augmented intelligence, we could aim for an environment where AI chatbots work with humanity.

Augmented intelligence represents a symbiotic relationship between man and technology. We will not be replaced or overtaken. Augmented intelligence should help strengthen our decision-making capacity – and, therefore, our intelligence. With generative AI tools, human capabilities can be enhanced, enabling us to accomplish tasks more efficiently, while humans provide the emotional intelligence, creativity, and critical thinking that AI and ChatGPT cannot replicate.

So, let’s be mindful of the possible dangers and limitations here, and figure out how to address them, but at the same time, let’s try to be optimistic about the potential now in front of us. I’m optimistic.
