FraudGPT: Understanding the Risks of AI-Powered Chatbots in Cybercrime and Bioweapons Creation on the Dark Web

By: Javid Amin

Artificial intelligence (AI) has the potential to revolutionize many industries, but it also comes with risks. One of the most concerning risks is the potential for AI to be used for malicious purposes, such as cybercrime and bioweapons creation.

In recent months, there have been reports of AI being used for malicious purposes on the dark web. One example is FraudGPT, a bot that can be used to create cracking tools, phishing emails, and other malicious content. FraudGPT is reportedly priced at $200 for a monthly subscription, rising to $1,000 for six months and $1,700 for a year.

Another example is WormGPT, a tool that can be used to launch sophisticated phishing and business email compromise attacks. WormGPT was advertised on several dark web forums as a blackhat alternative to GPT models, designed to carry out malicious activities.

These are just two examples of the ways in which AI is being used for malicious purposes on the dark web. As AI technology continues to develop, it is likely that we will see even more sophisticated and dangerous AI-powered tools being used for criminal purposes.

In addition to the risks posed by AI in the context of cybercrime, there are also concerns about the potential for AI to be used to create bioweapons. In a recent hearing before a US Senate technology subcommittee, Dario Amodei, the CEO of Anthropic, warned that AI systems could enable criminals to create bioweapons and other dangerous weapons in the next two to three years.

Amodei’s concerns are not unfounded. The detailed knowledge needed to build weapons of mass destruction, such as nuclear bombs, usually rests in classified documents and with highly specialised experts, but AI could make that knowledge far more widely available and accessible.

For example, researchers from Carnegie Mellon University in Pittsburgh and the Center for AI Safety in San Francisco recently discovered that open-source systems can be exploited to develop jailbreaks for popular, closed AI systems. By appending certain characters to the end of prompts, they could bypass safety rules and induce chatbots to produce harmful content, hate speech, or misleading information. This shows that existing guardrails are not foolproof.

The potential for AI to be used for malicious purposes is a serious concern. It is important to be aware of the risks and to take steps to mitigate them, including developing regulations and safeguards that prevent AI from being turned to harmful ends.

It is equally important to educate the public about these risks and to raise awareness of how AI can be misused. By taking these steps, we can help mitigate the dangers and ensure that this technology is used for good.
