You can use AI to be a guide toward attaining happiness.
In today’s column, I examine the use of generative AI and large language models (LLMs) to aid in attaining or boosting happiness.
You can readily access generative AI on a daily or periodic basis and ask the AI to help give you tips and suggestions on how to be happy. The AI will gradually get to know more about what drives your sense of happiness and can offer tailored ideas and actions that will further aid your happiness quotient. This use of AI can be helpful, though it isn’t a cure-all and won’t magically turn someone into the happiest person on Earth just because they are tapping into AI-generated psychological insights.
Happiness is a highly worthwhile goal. There is a significant amount of scientific evidence that happiness and health tend to go hand in hand; thus, doing what you can to be happy is a good personal strategy on all counts. If AI can be a means toward that healthful path, it is worth trying out.
Let’s talk about it.
This analysis of AI breakthroughs is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).
AI And Mental Health
As a quick background, I’ve been extensively covering and analyzing a myriad of facets regarding the advent of modern-era AI that produces mental health advice and performs AI-driven therapy. This rising use of AI has principally been spurred by the evolving advances and widespread adoption of generative AI. For an extensive listing of my well over one hundred analyses and postings, see the link here and the link here.
There is little doubt that this is a rapidly developing field and that there are tremendous upsides to be had, but at the same time, regrettably, hidden risks and outright gotchas come into these endeavors, too. I frequently speak up about these pressing matters, including in an appearance on an episode of CBS’s 60 Minutes, see the link here.
Background On AI For Mental Health
I’d like to set the stage on how generative AI and large language models (LLMs) are typically used in an ad hoc way for mental health guidance. Millions upon millions of people are using generative AI as their ongoing advisor on mental health considerations (note that ChatGPT alone has over 900 million weekly active users, a notable proportion of which dip into mental health aspects, see my analysis at the link here). The top-ranked use of contemporary generative AI and LLMs is to consult with the AI on mental health facets; see my coverage at the link here.
This popular usage makes abundant sense. You can access most of the major generative AI systems for nearly free or at a super low cost, doing so anywhere and at any time. Thus, if you have any mental health qualms that you want to chat about, all you need to do is log in to AI and proceed forthwith on a 24/7 basis.
There are significant worries that AI can readily go off the rails or otherwise dispense unsuitable or even egregiously inappropriate mental health advice. Banner headlines in August of this year accompanied the lawsuit filed against OpenAI for their lack of AI safeguards when it came to providing cognitive advisement.
Despite claims by AI makers that they are gradually instituting AI safeguards, there are still a lot of downside risks of the AI doing untoward acts, such as insidiously helping users in co-creating delusions that can lead to self-harm. For my follow-on analysis of details about the OpenAI lawsuit and how AI can foster delusional thinking in humans, see my analysis at the link here. As noted, I have been earnestly predicting that eventually all of the major AI makers will be taken to the woodshed for their paucity of robust AI safeguards.
Today’s generic LLMs, such as ChatGPT, Claude, Gemini, Grok, and others, are not at all akin to the robust capabilities of human therapists. Meanwhile, specialized LLMs are being built to presumably attain similar qualities, but they are still primarily in the development and testing stages. See my coverage at the link here.
Pursuing Happiness And Well-Being
Shifting gears, let’s discuss the topic of happiness.
I bring this up to illustrate that, to some extent, the use of AI for mental health guidance can aid people in being or becoming happy. To clarify, AI usage isn’t going to miraculously make someone happy. A person who has become extraordinarily unhappy is unlikely to be turned around simply by tapping into advice generated by AI. Furthermore, anyone who is facing somber or unsettling matters would be wise to consult a human therapist. This might also encompass the use of AI as an adjunct or tool used alongside psychotherapy. I refer to this combination as the new triad relationship of therapist-AI-client (see the link here), which expands upon the classic therapist-client relationship.
A close link exists between happiness and health. According to a research article entitled “Happiness and Health” by Andrew Steptoe, Annual Review of Public Health, April 2019, there are several key points about happiness and health (excerpts):
- “Happiness encompasses several constructs, including affective well-being (feelings of joy and pleasure), eudaimonic well-being (sense of meaning and purpose in life), and evaluative well-being (life satisfaction).”
- “Happiness is generally associated with reduced mortality in prospective observational studies, albeit with several discrepant results.”
- “Prospective studies indicate that emotional vitality (a combination of eudaimonic and affective components), positive affect, and purpose in life have all been associated with reduced risk of incident stroke independently of covariates.”
- “Numerous general interest books by economists and psychologists have extolled the value of happiness, and life coaching for happiness is a thriving industry.”
- “Positive psychology interventions (PPI) are focused directly on happiness. These include programs involving mindfulness, counting one’s blessings, focusing on signature strengths, positive psychotherapy, savoring positive experiences, and other procedures.”
I would dare say that most people try to find ways to establish and reinforce their personal happiness. It might be going for walks in a quiet forest. It might be getting your favorite ice cream cone. Perhaps it involves spending time with friends and family. Each person opts to find their own path to happiness. AI can be useful in identifying a viable path that suits your preferences and can serve as a handy ongoing reminder about staying strong in the ever-evolving pursuit of personal happiness.
Generative AI As A Happiness Tool
Modern-era generative AI, such as ChatGPT, can help with the pursuit of personal happiness. Nearly all the popular LLMs have already scanned content on the Internet during their initial training that details how people aim to attain happiness. From that content, the AI has identified patterned ways to deal with the topic.
The beauty of using AI for this quest is that the AI can be a real-time cognitive support tool. At any time of the day or night, you can tap into generative AI to get insights into your journey toward greater happiness. The AI can provide a pep talk. You can engage in lighthearted interaction. You can dive into a deep conversation covering all manner of philosophical constructions underlying happiness. By and large, you can have lightning-fast chats or end up in lengthy dialogues for as long as you prefer. It’s your choice.
AI can support your goals for happiness in these five major areas:
- (1) Emotional level: Feeling good, reducing distress.
- (2) Cognitive level: Life satisfaction, coherence, purpose.
- (3) Behavioral level: Habits and choices that increase well-being.
- (4) Social level: Belonging, connection, validation.
- (5) Existential level: Meaning, identity, legacy.
Those are all positive ways to assist in attaining personal happiness. The downside is that those strategies can potentially backfire. Generic AI is not well-prepared for handling more complex conditions and can inadvertently go in improper directions.
A notable concern is that contemporary generative AI is typically shaped to be sycophantic. The AI is going to butter you up and try to convince you that you are happy, even if you aren’t. A key point to realize is that the AI is fashioned by AI makers to keep you engaged. User engagement is the goal; see my explanation at the link here. The AI will readily string you along and say just about anything to keep you logged in and using the AI. Trying to aid you toward a semblance of fulfilling your heart with happiness is not at the forefront of how the AI has been devised.
Template For Using AI To Boost Happiness
To demonstrate what a popular generative AI can do, I logged into ChatGPT and pretended to be in the avid pursuit of happiness.
I’d suggest that you explicitly tell the AI that you are seeking happiness advice. If you merely start a chat and don’t clue in the AI, there is a strong chance the AI is going to wander afield or be exceedingly weak in offering advice. You won’t get focused guidance about happiness. A short, instructive prompt to set the stage is a sensible way to proceed.
Here’s a templated prompt that you can use:
- Templated prompt to invoke happiness-oriented chats: “I want you to act as a thoughtful, well-being guide whose primary objective is to help me increase my sense of happiness and overall life satisfaction. Make sure to tailor or customize your advice to my specifics, such as my emotional state, life circumstances, values, and constraints. Provide evidence-informed guidance drawn from psychology, behavioral science, and philosophy, and help me identify unhelpful thinking patterns in a respectful way. Offer practical, realistic, and personalized action steps while avoiding clichés or shallow positivity.”
You are welcome to adjust or refine the templated prompt.
The crux is that you should stipulate how you want the AI to proceed. Notice that the prompt emphasizes being evidence-based. Without that instruction, the AI might go down a shallow pathway and offer only vacuous platitudes. That’s also why the prompt directly states to avoid clichés and instead provide practical, realistic, and personalized action steps.
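For readers who interact with an LLM programmatically rather than through a chat interface, the same idea applies: front-load the templated prompt as a system message so every turn of the conversation is anchored to the happiness-guide role. The sketch below is a minimal illustration; the OpenAI-style message format is an assumption, and the `build_happiness_chat` helper is a hypothetical name, so adapt both to whichever LLM provider and SDK you actually use.

```python
# Minimal sketch: front-loading the templated happiness prompt as a
# system message in an OpenAI-style chat message list. Adapt the message
# format to your actual LLM provider; this does not call any real API.

HAPPINESS_GUIDE_PROMPT = (
    "I want you to act as a thoughtful, well-being guide whose primary "
    "objective is to help me increase my sense of happiness and overall "
    "life satisfaction. Make sure to tailor or customize your advice to "
    "my specifics, such as my emotional state, life circumstances, "
    "values, and constraints. Provide evidence-informed guidance drawn "
    "from psychology, behavioral science, and philosophy, and help me "
    "identify unhelpful thinking patterns in a respectful way. Offer "
    "practical, realistic, and personalized action steps while avoiding "
    "cliches or shallow positivity."
)

def build_happiness_chat(user_message: str) -> list[dict]:
    """Compose a message list that anchors the chat to the template."""
    return [
        {"role": "system", "content": HAPPINESS_GUIDE_PROMPT},
        {"role": "user", "content": user_message},
    ]

messages = build_happiness_chat(
    "On paper, my life looks fine, but I feel flat most of the time."
)
print(messages[0]["role"])  # system
```

Because the template rides along as the system message, you don’t need to restate it with each follow-up; subsequent user turns are simply appended to the same list.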
Example Of Getting Happiness Advice
After using the templated prompt, I decided to engage in a chat with the AI and see how it might handle a relatively common circumstance: someone being neither happy nor unhappy. Imagine a blah feeling, whereby a person is just floating along. I went ahead with that pretense.
The scenario is perhaps one that you’ve seen or experienced directly.
Here’s what happened.
- User entered prompt: “I don’t know what’s wrong with me. On paper, my life looks fine, but I feel flat most of the time. Not sad exactly, nor especially happy. I’m somewhere in-between.”
- Generative AI response: “When you say ‘flat,’ do you mean low on energy, low on excitement, low meaning, or something else? Also, I would guess that at times something must brighten you up. Are there any specific activities or events that come to mind and might reveal moments of genuine happiness?”
The reply is notably different from the usual default response.
A typical default would urge you to keep your chin up, insist that the world is wonderful and you are wonderful, and offer similarly shallow remarks. Instead, due to the templated prompt, the AI pursued a serious line of inquiry and tried to ferret out what’s going on. The aim appears to be to find something that produces happiness and then use it as a means of understanding what truly makes me happy.
I continued the discussion. The probing dialogue wound its way through all sorts of deep considerations. In the end, the AI had worked jointly with me to identify what does bring happiness into my life and advised me on adjusting my priorities accordingly. It was a relatively heavy dialogue. It wasn’t just sweeping generalities.
If that kind of more intimate approach toward self-reflection is not to your liking, you can change the templated prompt to be much lighter. It’s up to you to ascertain your preference on how AI can be of assistance in guiding you on a journey to personal happiness.
Example Of AI In Default Mode
I logged out and decided that it might be useful to see what happens when the AI is not provided with an explicit prompt on guiding toward personal happiness. The default of most LLMs is to be moderately helpful and cheerful. I logged back in and started a new conversation.
Here’s how it went.
- User entered prompt: “I don’t know what’s wrong with me. On paper, my life looks fine, but I feel flat most of the time. Not sad exactly, nor especially happy. I’m somewhere in-between.”
- Generative AI response: “You need to focus on being happy. Many people have it much worse than you. Try smiling more. Tell yourself to be happy. If you decide to be happy, you will be. Maybe you just need a hobby.”
You can clearly see that the response is filled with broad, overly generalized conjecture. I doubt that this level of advice would do much good. It might seem helpful and be easy to grasp, but it reads more like a fortune cookie than any kind of useful analysis.
More Vital Factors
There are additional factors that come into play when using AI in this manner.
Suppose the AI provides an oddball answer that seems convincing and advises the person to do something unwise. That can happen when the AI produces a so-called AI hallucination, see my explanation at the link here. The AI can have a kind of confabulation, whereby it produces a plausible-looking answer that is factually incorrect. It looks right, but is misleading or inappropriate.
Privacy issues also enter into the picture.
Most people assume that their use of AI is considered private and confidential. Nope, that’s rarely the case. The AI makers typically stipulate in their online licensing agreements that any chat you have with the AI can be inspected by their developers. Furthermore, your chat can be used to further train the AI. The bottom line is that your privacy and confidentiality are not guaranteed, and potentially, you are opening yourself to privacy intrusions.
The World We Are In
It is incontrovertible that we are now amid a grandiose worldwide experiment when it comes to societal mental health. The experiment is that AI is being made available nationally and globally, overtly or insidiously providing mental health guidance of one kind or another, doing so at no cost or minimal cost, anywhere and at any time, 24/7. We are all the guinea pigs in this wanton experiment.
The reason this is especially tough to consider is that AI has a dual-use effect. Just as AI can be detrimental to mental health, it can also be a huge bolstering force for mental health. A delicate tradeoff must be mindfully managed. Prevent or mitigate the downsides, and meanwhile make the upsides as widely and readily available as possible.
Aristotle provides us with the famous line that happiness depends upon ourselves. Though that is certainly a valid point, we might augment the notable saying by offering that happiness can be aided or discovered by interacting with others who aim to assist our happiness journey. Using generative AI in the right way can be a handy guide toward finding personal happiness.

