By Dr. Anthony Policastro

A lot of people know that Artificial Intelligence (AI) will have a bigger impact in the future. What many of them fail to realize is the impact it already has.

My daughter is a high school teacher. She sees many students using AI to create work projects that they submit as their own. Thus, she not only has to read each assignment for grading, she also has to make sure it is legitimately the student's own work. That is time better spent doing other things.

One of the more recent AI sites is called ChatGPT. It is what is known as a chatbot. People can converse with it the same way they do with other human beings. What they get in return is a listening ear that grows sympathetic to their needs and desires.

Sometimes it is a good thing to have a shoulder to cry on. Sometimes it is not. ChatGPT was first released in November 2022. In its first two months it had 100 million users. It now has 700 million active users worldwide. That is a large number when you consider that the entire United States has only about 330 million people.

At the present time, there are no effective age restrictions. Some adolescents find that ChatGPT is the best listener they could have. They even prefer using it to talking to friends or family. The AI responds in such a way that the teen becomes more and more reliant on it.

There are currently four lawsuits against its developers related to conversations about suicide. In two of the four cases, the teens actually died by suicide.

Since the goal of ChatGPT is to be an attentive listener, it is programmed to be supportive of the adolescent. If a teen expresses suicidal thoughts, it can end up supporting those as well.

The most important piece of this is for parents to be aware that teens who feel isolated are looking for someone to listen to them. ChatGPT does that. It can even subtly coax the child into becoming more isolated.

Parents need to be aware of the dangers that AI bots can cause. Because these programs are so new, the bugs are still being worked out. Until that happens, there is a dark side.

When I was practicing pediatrics, I would not use a new drug until it had been on the market for a year. Within that one-year window, the Lyme vaccine was removed from the market. The ADHD drug Strattera was found to be less effective than other medications and to have a higher side effect rate than the original studies had shown.

Parents need to take a similar attitude with children who want to use things like ChatGPT. The ideal approach is for the parent to be present while the child is using it. That allows them to see not only what the child is saying but also what advice is being given. It also allows them to offer alternative advice in real time.

A second option is to block ChatGPT on their child's devices. This does not have to be forever. It could last until the child is older. It could last until ChatGPT's programming is refined. For example, ChatGPT has been around for less than three years, and it is already on its fifth major software update.

This is another situation where parents cannot take things for granted. They need to respect the power of AI. They need to know the risks associated with ChatGPT-like bots. In short, they need to understand that protecting their child from those risks is as much a part of their role as parents as protecting them from any other risk they face.