Abhinav Mittal, Author and CXO Advisor

Abhinav Mittal is an IT Cost Reduction Expert and Independent Technology Investments Advisor to C-level executives and board members of large multinational companies based in the Middle East. He has written two books on how companies can get more out of their technology investments, covering more than 200 IT cost optimization ideas. Known as the CFO's best friend, Abhinav has delivered millions in bottom-line improvements for the companies he has worked with. He holds an MBA from Great Lakes, is a University Gold Medallist from Indraprastha University, New Delhi, and has completed a leadership program at HBS. He also holds a wide range of technical credentials in subjects including cloud computing, IT governance, and AI ethics. Abhinav, a golden visa holder, resides in Dubai with his family. His mantra for cost optimisation is "Buy what you will use, and use what you bought."

 

You’ve undoubtedly used or read about ChatGPT at this point. It’s great for casual conversations, boosts productivity, sparks ideas quickly, and definitely reduces dependency on Google. However, when dealing with complex real-world issues, I would still prefer speaking with a human who has relevant experience over relying solely on ChatGPT. Here are my three reasons for doing so:

  1. ChatGPT does a remarkably good job of generating sentences that sound credible, without doing any fact-checking. Unless you already know the correct answer, there is currently no way to tell whether ChatGPT is right or wrong. Not knowing the source of the information only erodes your faith in anything published on the internet, which is already rife with false information. Legal frameworks exist to stop people from disseminating false information, but can you hold an AI accountable for spreading incorrect information while sounding confident?
  2. AI-based programs are often trained on content scraped from the internet, frequently without permission, which raises complex questions about the ethical acquisition of training data. A lot of literature is now published digitally. Imagine if a publisher ran AI over its extensive database of books and began producing new versions of the works without crediting the authors. As a published author of two books, I must admit I find this a little scary. A book produced by AI might be entertaining to read, but will it contain any fresh ideas? A remixed song is only popular because someone wrote the original.
  3. Our perception of the world shifts as we get older, and the changing social and cultural norms around us shape our personal values, which in turn changes how we communicate at different stages of our lives. People’s opinions of well-known personalities and authors can also evolve over time. However, an AI trained on a snapshot of historical communication styles will keep mimicking that historical persona, regardless of how much the actual person has changed. As humans, we change, but AI-trained personas might never stop seeing us as the individuals we once were.

Generative AI is still emerging, and while it has shaken a few industries (publishing, education, media & advertising, and software development), there is still some time before it can be fully trusted.

Until then, I would still trust a talented human working with ChatGPT over ChatGPT alone. I was exposed to several ethical aspects of AI through the CEET (Certified Ethical Emerging Technologist) program, which I completed last year. The challenging curriculum gave me many new perspectives, and thanks to ChatGPT, AI ethics is now a hot topic.
