The biggest boom in remote work came during the COVID pandemic, but corporations have started calling employees back into their offices, either because of prepaid office space or for better control over employees' work. Some, like Spotify, have stuck with the remote model to this day.
When ChatGPT first came out, most of my colleagues and I used it as an enhancer for our work - mostly code and research - much like you'd use Google, but more direct, saving us the time of browsing through multiple websites. We'd have a question about our work and ask an AI "expert" (which wasn't that good back then) for the answer. No more, no less.
Today, I see a LOT of people using AI to write entire apps, entire essays, university thesis paragraphs, marketing messages, and emails, talking to it non-stop while working and following it blindly at this point. The worst part: sometimes it takes far longer to repair/edit/debug/update what the AI produces than it would have taken to learn to do the thing yourself.
After working with a lot of AI-generated code lately, I've found myself spending hours checking for and repairing easy-to-spot security flaws. In my experience, AI is generally bad at implementing secure code (or architectures), and just as bad at recommending what to do to make your app more secure (or even decently secure).
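To give one concrete (and deliberately simplified) example of the kind of flaw I mean: queries built by string formatting instead of parameterization. This is a hypothetical sketch, not code from my actual project, but it's representative of what I keep cleaning up:

```python
import sqlite3

# The kind of code I often get back from an AI assistant (hypothetical example):
# user input is formatted straight into the SQL string -> classic SQL injection.
def find_user_unsafe(conn: sqlite3.Connection, username: str):
    query = f"SELECT id, email FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()

# What it should look like: a parameterized query, so the driver handles
# escaping and the injection vector disappears.
def find_user_safe(conn: sqlite3.Connection, username: str):
    query = "SELECT id, email FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchall()
```

The fix is trivial once you spot it, but spotting it across hundreds of generated lines is exactly where my hours go.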
Have you had this problem as well? If yes, how do you tackle it?