© GOOD Worldwide Inc. All Rights Reserved.

Company's attempt to use AI bot to help employees with queries turns out to be a laugh riot

The company laid off many employees and bought an expensive AI model to help other workers, but in an epic turn of events, it started 'hallucinating.'

Representative Cover Image Source: Pexels | Negative Space; Reddit | r/mildlyinfuriating

Artificial Intelligence (AI), which has now ventured into many fields, is already being implemented in workplaces. From automated candidate screening to managing customer relationships, AI has helped make corporate work easier. Of late, companies have been deploying AI models to do specific jobs, including answering internal questions for employees.

Representational Image Source: Pexels | Photo by Sanket Mishra

A post on the subreddit r/mildlyinfuriating, which has since been deleted, caught the attention of the internet because of the way the AI bot responded to workers' queries. In the post titled "Our company introduced AI helpers that hate their lives," an employee stated that their company purchased an expensive ChatGPT-based AI chat model to answer internal questions. The company was "very specific in what they gave to it," so there were a lot of instances where the model responded by saying it "wasn’t sure" about the results.

Shortly after, when workers would ask questions, the AI model would tell them, "I'm not sure, Google it." And that was not even the end. According to the employee, the AI model resorted to "making stuff up." It started "mismatching samples of different documents and just ignoring the original questions" asked by workers. The post mentioned that the company had spent thousands of dollars on the model and leaned heavily on it to justify the layoffs. However, now the AI bot "hates its life and is just making stuff up so people will leave it alone."


When the employee talked with the engineers in charge of the model, they explained that the model bases its answers on a sample of 5-10 top-rated documents. If none of them actually contains a solution, it starts "hallucinating" and "mismatching documentation pieces together." One of the most glaring problems highlighted was that many of the issues stem from the way the questions are being asked. The employee added that their company has workers from all over the world, so some employees may not be forming proper questions in English, hence creating this confusion.
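The retrieve-then-answer behavior the engineers described can be sketched in a few lines. This is a purely illustrative reconstruction, not the company's actual code: the scoring function, threshold, and document set are all assumptions. The point it shows is where such systems go wrong — if no retrieved document clears a relevance bar, the bot should refuse rather than stitch unrelated passages together.

```python
import string

def score(question, document):
    """Crude relevance score: fraction of question words found in the document."""
    strip = lambda text: {w.strip(string.punctuation) for w in text.lower().split()}
    q_words, d_words = strip(question), strip(document)
    return len(q_words & d_words) / len(q_words) if q_words else 0.0

def answer(question, documents, threshold=0.4, top_k=5):
    # Rank documents by relevance and keep only the top few,
    # mirroring the "5-10 top-rated documents" the engineers mentioned.
    ranked = sorted(documents, key=lambda d: score(question, d), reverse=True)[:top_k]
    if not ranked or score(question, ranked[0]) < threshold:
        # No document clears the bar: a well-behaved bot refuses instead
        # of "mismatching documentation pieces together".
        return "I'm not sure, Google it."
    return ranked[0]

docs = ["To reset your VPN password, open the IT portal and choose Reset.",
        "Expense reports are due on the last Friday of each month."]
print(answer("How do I reset my VPN password?", docs))   # matches the first doc
print(answer("What is the parental leave policy?", docs))  # falls back to refusal
```

A badly phrased question, or one in broken English, drives the relevance score below the threshold for every document, which is consistent with the employee's observation that how questions are asked was a root cause of the confusion.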

The post has prompted amusing reactions from people on the platform. Most comments focused on the funny side of the situation, and only a handful of people were interested in the model's malfunction. One user, u/Furdiburd10, commented, "So, the company hated paying for people doing support stuff so got an AI to do it but AI didn't like it and started getting more and more depressed. That AI needs some vacation."

Image Source: Reddit | u/saywhat252525
Image Source: Reddit | u/Signal_This

"They're setting up AI to handle all front-end stuff. At the end of the week, you're supposed to connect to another AI bot and tell them how your work week was, and any concerns, or issues that stopped you from working optimally. When you contact HR, you have to see if any of your issues can be handled via their AI bot. Anything it can't, it sends a ticket to a human. It's such a joke," wrote u/Kajiic. Going by the incident, it is safe to say that in the coming years, we can expect AI to change the job market completely.
