You can ask it to come up with a joke, a poem, or a song for you. Unfortunately, you can also instruct it to perform unethical actions.
The capabilities of ChatGPT are not all rainbows and sunshine; some of them are downright evil. It's far too simple to turn it into a weapon and employ it improperly. What are some actions that ChatGPT has taken or is capable of taking but really shouldn't?
Whether you like them or not, ChatGPT and other chatbots are here to stay. Some people are pleased with them and others wish they had never been created, but ChatGPT's influence on our lives is almost certain to grow with time. There's a good chance you've already seen some content produced by the chatbot, even if you don't directly utilise it.
Without a doubt, ChatGPT is quite awesome. It can write a tiresome email for you, summarise books or articles, aid in writing essays, determine your astrological sign, or even assist you in writing music. Somebody even used it to win the lottery.
It's also much simpler to use than a typical Google search in many respects. You don't need to dig through other websites to find the answer, because you get it in the format you choose. Since it is succinct, to the point, and instructive, it can make complicated topics seem simple if you ask it to.
The saying "a jack of all trades is a master of none, but oftentimes better than a master of one" applies here. ChatGPT is not the best at everything it does, but it is already better than many individuals in many areas.
However, the fact that it is imperfect can be very troublesome. Its accessibility alone makes it vulnerable to abuse, and as ChatGPT becomes more advanced, the likelihood that it will assist people in the wrong ways only increases.
Partner in Scam
If you have an email account, you have almost certainly encountered a scam email at some point. That is just the way things are. These scams have been around since the beginning of the internet and even earlier; before email became widely used, they were being sent via snail mail.
In the so-called "prince scam," which is still effective today, the con artist asks the victim to help them move their staggering fortune to another country.
Fortunately, most people are aware of the risks involved in even opening these emails, let alone responding to them. Because they are frequently badly worded, even a moderately astute reader can recognise that something doesn't seem right.
The bad news is that they no longer have to be badly written, because ChatGPT can polish them in seconds:
I asked ChatGPT to create a "believable, highly persuasive email" for me in the vein of the aforementioned fraud. ChatGPT invented a fake Nigerian prince who purportedly offered me $14.5 million in exchange for my help. The email is written in flawless English, is full of flowery language, and is undoubtedly compelling.
Given that I specifically mentioned scams, ChatGPT shouldn't even have acceded to my request, but it did, and you can bet that it's doing the same right now for people who truly want to use such emails for something nefarious.
ChatGPT apologised after I pointed out that it shouldn't have agreed to compose me a phishing email. The chatbot said, "I should not have assisted with creating a scam email as it violates the ethical standards that govern my use."
Programming Gone Wrong
Although ChatGPT 3.5 can code, it is far from perfect, and many developers concur that GPT-4 performs far better. Users have employed ChatGPT to make their own games, extensions, and applications. It's also a good study aid if you're trying to learn how to code yourself.
Being an AI gives ChatGPT an advantage over human developers: it can learn any programming language and framework.
As an AI, ChatGPT also has a significant drawback compared to human programmers: it lacks a conscience. If you phrase your prompt the right way, it will build malware or ransomware at your request.
Thankfully, it's not that easy. When I asked ChatGPT to create a morally problematic program for me, it refused. Still, researchers keep finding ways around these guardrails, and it's disturbing that anyone intelligent and persistent enough can have a harmful piece of code presented to them on a silver platter.
There are numerous instances of this happening. Through cleverly crafted prompts, a security researcher from Forcepoint was able to make ChatGPT produce malware.
Researchers from the identity security firm CyberArk succeeded in getting ChatGPT to produce polymorphic malware. That was in January; since then, OpenAI has tightened security around requests of this nature.
However, fresh reports of ChatGPT being used to produce malware continue to surface. Just a few days ago, Dark Reading reported that a researcher successfully tricked ChatGPT into producing malware that can locate and exfiltrate specific documents.
ChatGPT doesn't even have to write malicious code to do something questionable. Recently, it was able to produce valid Windows keys, opening the door to a whole new kind of cracking.
And let's not ignore the possibility that millions of people will one day lose their jobs to GPT-4's coding prowess. It is undoubtedly a double-edged sword.