With its ability to generate text and answer questions, OpenAI’s ChatGPT has swept the globe. Hackers are now exploiting that popularity to commit fraud. Kaspersky researchers recently found a fake ChatGPT desktop programme carrying new malware that can steal users’ social media login credentials.

Fake ChatGPT is stealing your data

Links to the phoney app are being shared on well-known social media sites such as Facebook, Twitter, and Instagram, according to a blog post from the cybersecurity firm. Several social media ads encourage users to download the app with the promise of a $50 bank-account reward. Instead, the application installs malware that steals the user’s data. Kaspersky has dubbed the new malware Fobo (Trojan-PSW.Win64.Fobo). Researchers found that the fraudsters created a fake ChatGPT website that closely resembles the real one. Users who click the link posted on social media are led to the phoney site, and when they attempt to download the application, the installation appears to crash and an error notice is shown.

Despite the error notice, the Fobo trojan is covertly installed in the background. According to Kaspersky, the malware targets saved login credentials and cookie data from popular browsers such as Chrome, Edge, Firefox, and Brave. If hackers get their hands on the cookies, they can hijack the user’s logged-in sessions on numerous platforms, including Facebook, TikTok, and Google, especially accounts connected to businesses. The hackers may also obtain additional information, such as an account’s ad spending and its current balance.
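To see why stolen cookie files are so valuable, it helps to know that Chromium-based browsers keep cookies in a local SQLite database in the user’s profile. A stealer that copies that file can enumerate session cookies offline. The sketch below is illustrative only: it builds a toy database with a heavily simplified schema (the real Chrome "Cookies" file has many more columns and encrypts values), then shows the kind of offline query a stealer performs.

```python
import sqlite3

def build_toy_cookie_db(path):
    """Create a toy stand-in for a browser cookie store.

    Real Chromium-based browsers keep a similar SQLite file ("Cookies")
    in the user profile, with encrypted values and more columns; this
    simplified schema is for illustration only.
    """
    conn = sqlite3.connect(path)
    conn.execute("CREATE TABLE cookies (host_key TEXT, name TEXT, value TEXT)")
    conn.executemany(
        "INSERT INTO cookies VALUES (?, ?, ?)",
        [
            (".facebook.com", "c_user", "session-id-123"),
            (".google.com", "SID", "session-id-456"),
        ],
    )
    conn.commit()
    conn.close()

def dump_cookies(path):
    """Enumerate hosts and cookie values from a copied database file.

    This is effectively what a stealer does once it has exfiltrated the
    file: read it offline, with no password required.
    """
    conn = sqlite3.connect(path)
    rows = conn.execute("SELECT host_key, name, value FROM cookies").fetchall()
    conn.close()
    return rows
```

Because a valid session cookie stands in for a password, replaying it lets an attacker act as the already-logged-in user, which is why business accounts with ad budgets are singled out.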

Researchers say cybercriminals are using the phoney ChatGPT desktop software to target people all over the globe; the fake client has already harmed users in Africa, Asia, Europe, and the Americas. Kaspersky security expert Darya Ivanova described the Fobo trojan as follows: “This campaign targeting ChatGPT is an excellent illustration of how attackers use social engineering tactics to exploit users’ trust in well-known products and services. It is crucial for users to understand that just because a service appears genuine does not mean it is. Staying informed and exercising caution can help users defend themselves against such attacks.”
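One concrete form of the caution Kaspersky recommends is verifying an installer’s checksum against the value the vendor publishes on its official site before running it; a tampered or look-alike download will not match. A minimal sketch (the file name and hash here are placeholders, not real ChatGPT artefacts):

```python
import hashlib

def sha256_of(path, chunk_size=65536):
    """Compute the SHA-256 digest of a file, streaming it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_download(path, expected_hex):
    """Return True only if the file matches the published checksum."""
    return sha256_of(path) == expected_hex.lower()
```

A mismatch does not tell you what the file actually is, only that it is not the file the vendor published, and that is reason enough not to run it.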
