Responsible AI use - my two cents

AI has been a buzzword for many years and is gaining more and more public attention. However, the word “AI” is too broad to describe today's technological advancements. Architectures like Large Language Models (LLMs), Large Vision Models (LVMs), or even Reasoning Language Models (RLMs) still don't meet the definition of AI, despite their (I avoid “it” because “they/them” feels more appropriate, much like how many people now avoid referring to animals as “it”) “intelligent” behaviour, which creates the illusion.

A Google Trends line chart showing global search interest for the term “AI” from 2004 to 2026. Interest remains very low and flat for most years, then rises sharply starting around 2022, peaking near 2025 before slightly declining.

So what is responsible AI use? In this post I will focus on how we, as individuals, can use AI responsibly. Although I agree that corporations bear responsibility for the social and environmental impact of using and promoting AI, the consequences for individual users are more immediate.

It's no different from responsible drinking, responsible gaming, and responsible substance use: the essence is that we bear the consequences of our own AI use.

AI won't assume liability, at least for now. They are still algorithms that require massive computing power to predict the next word (token) in a response. Regardless of the guardrails or how carefully you write your prompt, they can still hallucinate and give you seemingly correct but wrong answers.
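To make “predict the next token” concrete, here is a minimal sketch. The vocabulary and probabilities are entirely made up for illustration; real models operate over tens of thousands of tokens with learned weights. The point is that the correct continuation is merely the most probable one, so a fluent but wrong answer is always reachable.

```python
import random

# Toy "language model": maps a context to a probability distribution over
# possible next tokens. The words and probabilities are invented for
# illustration only.
TOY_MODEL = {
    ("the", "capital", "of", "france", "is"): {
        "paris": 0.90,   # the correct answer is just the most likely token
        "lyon": 0.07,    # plausible-sounding but wrong
        "london": 0.03,  # confidently wrong
    },
}

def next_token(context: tuple[str, ...], rng: random.Random) -> str:
    """Sample the next token from the model's distribution for this context."""
    dist = TOY_MODEL[context]
    tokens = list(dist)
    weights = [dist[t] for t in tokens]
    return rng.choices(tokens, weights=weights, k=1)[0]

rng = random.Random(0)
samples = [next_token(("the", "capital", "of", "france", "is"), rng)
           for _ in range(1000)]
# Most samples are correct, but some are not; nothing in the sampling step
# distinguishes a true answer from a hallucinated one.
print(samples.count("paris") / len(samples))
```

Even with a 90% chance of the right token at every step, a long answer compounds those odds, which is one intuition for why hallucinations slip through.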

AI is bad at recalling context, at least for now. They still struggle to remember every word you have given them. Although we have developed various solutions like RAG or agentic retrieval, they don't have persistent memory and often drop details as the conversation grows.
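A rough way to picture why details get dropped: the model only ever sees a fixed-size window of recent context, and anything pushed out of that window is simply gone. This sketch counts messages instead of tokens and uses an arbitrary window size, but the effect is the same.

```python
def visible_context(conversation: list[str], window: int) -> list[str]:
    """Return only the most recent messages that fit in the context window.

    Real models count tokens rather than whole messages, but the effect is
    identical: older content silently falls out of view.
    """
    return conversation[-window:]

chat = [
    "My name is Alex.",             # an early detail...
    "I'm allergic to peanuts.",     # ...and an important one
    "What's a good dinner recipe?",
    "Can you make it spicier?",
    "And quicker to cook?",
]

# With a window of 3 messages, the allergy (and the name) are no longer
# visible to the model, even though you said them plainly.
print(visible_context(chat, window=3))
```

RAG and agentic retrieval try to paper over this by fetching old details back into the window on demand, but retrieval can miss, which is why long conversations still lose track of things.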

AI can make mistakes, consider checking important information, at least for now. You may notice this under the input box in ChatGPT's interface. Training data can be outdated, and imperfect query terms can be used when they search the internet on your behalf. Nevertheless, they will still give you a seemingly correct answer.

How can we be responsible? First, we should understand their limitations and avoid relying on them too heavily. For example, it's probably fine to ask AI for a recipe, if you can bear the consequences of an overcooked dinner.

But in some industries the consequences can be far greater. Imagine a vibe doctor who uploads your X-ray to an AI and instantly diagnoses you with cancer. A vibe lawyer whose entire defence strategy is generated by AI without checking. A vibe accountant who instructs an AI agent to file your taxes and signs off immediately... And in a broader context, this could erode trust and distort public discussion. Posts like “ChatGPT told me that” or “Gemini said this is incorrect” should absolutely be avoided, and commenting “@grok is this true?” should be disdained by society, unless you are being sarcastic.

AI-generated image of four people crowding around a smartphone, staring intently at the screen. The phone displays a robot-like AI avatar with glowing eyes and a speech bubble that reads, “Is it true? @grok,” highlighting people seeking validation from an AI.

Even worse, some people fabricate facts or quote legal clauses out of nowhere just to support their claims. This is not news, but the advancement of technology certainly makes it easier. Such irresponsible usage also forces non-users to take on the burden of fact-checking, and how can we compete with an algorithm that types faster than most humans?

Responsible AI use means the user has to assume responsibility when using them. Just as we shouldn't drive after consuming alcohol or marijuana, we shouldn't copy and paste AI-generated answers into a textbox without fact-checking. Nor should we claim something works after following instructions provided by an AI without knowing why. It's like “I drove fine last time after drinking, so it must be safe.” When stepping into unknown realms, your judgement is already impaired—so get yourself “sober” with some fundamentals first. (Oh, an em dash—caught you. Yes, that sentence is AI-generated, and even this note is AI-generated.)

AI generates answers, but not the answer. They have never been a silver bullet, unless we eventually create an omnipotent one and stay on good terms with them, but that is another story. Until then, the responsibility lies with us, the end users.