AI-driven spam bots like AkiraBot can be quickly created by leveraging OpenAI's language models, the same technology behind ChatGPT. Through carefully crafted prompting, the bot has targeted more than 80,000 small- and medium-sized business websites, flooding contact forms and chat widgets with AI-generated spam messages, a genuine nightmare for screen reader users and anyone else forced to interact with that content. These messages, which promote fraudulent SEO services, bypass traditional security measures such as CAPTCHAs and network-based detection, posing a significant challenge to website security. AkiraBot initially targeted Shopify sites but has since expanded to platforms like GoDaddy, Wix, and Squarespace, platforms that, while popular, have also been noted for accessibility-related issues.
The AkiraBot incident highlights the potential for misuse of AI systems and the urgent need for robust regulation of the AI ecosystem. Even platforms like Facebook have been criticized for hosting AI accounts with digitally fabricated names, profile information, and even profile pictures, despite the fact that most human users would prefer not to interact with an AI account. The ease with which tools like ChatGPT can be turned to building harmful bots underscores the importance of governing large language model (LLM) systems to prevent digital criminal activity. As AI technologies continue to evolve, it is crucial that consumers worldwide have safeguards in place to protect them from such threats. The rise of AI-powered spam and other malicious activity must be carefully monitored to ensure that innovation does not come at the expense of online safety and accessibility.