By Nishit Singh Raghuwanshi
The use of artificial intelligence and AI models has skyrocketed over the last couple of years. Users have come to depend on AI so heavily that old-school search engines like Google and Bing have integrated AI modes of their own. But everything comes at a cost, and here that cost could be concerning. A recent report published by Netcraft claims that AI tools can deliver wrong links, some of which could direct users straight to phishing attacks. Tools like ChatGPT and Perplexity have a history of hallucinating, which can result in inaccurate or outright malicious URLs.

A Brief Look At The Report

According to the Netcraft report, AI models from OpenAI's GPT-4.1 series were asked for the login URLs of 50 brands across industries such as tech, utilities, retail, and finance. The chatbot returned the correct link only about 66% of the time. Many of the remaining links pointed to domains the brands did not own, including unregistered or inactive sites that attackers could claim and turn into phishing traps. The same report notes that there are more than 17,000 AI-written GitBook phishing pages that prey on crypto users while posing as legitimate support hubs or product documentation. These sites look clean and are tailored for AI consumption. The scenario could trigger a major problem: if users trust these links and land on a malicious page, they become easy targets for phishing scams. A similar incident was recorded with Perplexity, where the AI model returned a phishing site when asked for Wells Fargo's URL. Netcraft has witnessed other malpractices as well, all of which second the warning issued by OpenAI CEO Sam Altman a few days ago, when he urged the public not to trust AI models completely.