The launch of ChatGPT, built on GPT-3.5, in November 2022 sparked a surge of interest in AI and Large Language Models (LLMs) reminiscent of the gold rush of 1848. The ensuing AI boom has driven widespread integration of AI into companies’ websites, products, and services to boost productivity and sales. However, it also brings numerous risks: copyright disputes, bias, ethical concerns, privacy and security issues, and the impact on jobs. To mitigate these risks, companies must regulate AI use internally, control access, and enforce strict AI policies; Amazon and JPMorgan Chase, for example, have restricted staff use of ChatGPT. AI adoption also raises concerns about protecting intellectual property and about models learning from incorrect inputs or from data planted by malicious actors. To address these challenges, companies need to adopt responsible, governed approaches to AI, potentially borrowing IT security practices such as system snapshots to contain and recover from incidents.