AI hype vs cyber security: are you ready to protect your data?
Across every sector, AI use is growing faster than governance frameworks can keep up. Employees are already using AI to analyse data, write reports or brainstorm ideas, often with little oversight.
This ‘shadow AI’ is creating a new layer of complexity for IT and compliance teams. We regularly see organisations drafting AI policies while still relying on outdated systems, weak access controls and limited visibility over where their data actually lives.
The irony is that many of the risks people now associate with AI already exist inside their current platforms. As well as worrying about what staff might feed into ChatGPT, it’s worth asking: how well is your business protecting the data it already holds?
Concern about AI is understandable, but it’s not the whole picture
Data privacy within AI tools is a valid concern. Businesses need to know where their information goes, how it is stored and whether it could be used to train third-party models. But focusing solely on AI privacy can obscure a more immediate problem: most breaches still happen through the same old weaknesses.
It is also worth noting the inherent risk in adopting solutions that are still in their infancy. If you are considering AI tools, don’t forget to ask some key questions, including:
How mature is the vendor organisation? If they need to pivot away from the functionality we are using, how exposed will our business be?
What assurances do they provide in terms of copyright, privacy, and legal protection from the use of their product?
What safeguards do they have in place to prevent or mitigate the risk of bias, hallucinations and inaccurate data processing?
Is our team adequately trained to use the solution productively, and to apply critical thinking and quality control to the output?
None of these challenges is entirely new to IT management, though AI certainly raises the stakes.
The New Zealand National Cyber Security Centre (NCSC) reported 1,315 incidents between April and June 2025. Scams and fraud made up the largest share, followed by phishing and credential harvesting. Total reported financial losses were $5.7 million, down from the previous quarter, but notably, just 50 incidents accounted for 94 percent of that loss.
These figures show that most attacks are low-level and persistent, but the small number that slip through cause real damage. And they rarely involve data exposed through AI platforms. They stem from social engineering, human error, outdated systems, and unpatched vulnerabilities.
When AI meets weak foundations
That said, adopting AI without solid security hygiene is like building an extension on an unstable house. The technology might look impressive, but the structure beneath can’t support it.
Common oversights we see include:
Insufficient governance of which new systems are adopted and how they are used
Legacy systems left unpatched for months or years
Shared or outdated credentials with broad access rights
Little or no data classification, so no one really knows what’s sensitive
Minimal planning for how to respond if something goes wrong
AI doesn’t cause these problems, but it amplifies them. More data moving between systems means more opportunities for exposure.
Where to start
AI can strengthen your organisation, but only if your security posture is already sound. Before exploring advanced tools or integrations, focus on these essentials:
Patch and update consistently. Known vulnerabilities remain the most common entry point for attackers.
Use multi-factor authentication and strong access controls. Limit who can reach what.
Create an asset register. You can’t protect what you don’t know exists. List your critical systems, noting what data they hold and what happens if they fail. (We can assist you with this.)
Classify your data. Identify what’s confidential, and make sure it’s handled accordingly.
Have an incident response plan. Even small businesses need a clear process for containing and reporting breaches.
Audit regularly. Review permissions, configurations, and user activity.
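For teams who like to see these ideas made concrete, here is a minimal, purely illustrative sketch of an asset register with classification and patch-age checks. All field names, the 30-day patch window, and the example systems are assumptions for illustration, not a prescribed tool or policy:

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Hypothetical record structure -- adapt the fields to your own systems.
@dataclass
class Asset:
    name: str
    classification: str   # e.g. "public", "internal", "confidential"
    last_patched: date
    mfa_enabled: bool

PATCH_WINDOW = timedelta(days=30)  # assumed patching policy for this sketch

def flag_risks(assets: list[Asset], today: date) -> list[str]:
    """Return names of assets breaching two basic hygiene rules:
    patched within the window, and MFA on anything confidential."""
    flagged = []
    for a in assets:
        overdue = today - a.last_patched > PATCH_WINDOW
        exposed = a.classification == "confidential" and not a.mfa_enabled
        if overdue or exposed:
            flagged.append(a.name)
    return flagged

# Example register (invented entries for illustration only).
register = [
    Asset("CRM", "confidential", date(2025, 1, 10), mfa_enabled=False),
    Asset("Intranet wiki", "internal", date(2025, 6, 1), mfa_enabled=True),
]

print(flag_risks(register, today=date(2025, 6, 15)))  # -> ['CRM']
```

Even a spreadsheet capturing the same fields delivers most of the value; the point is having a single place where classification, ownership, and patch status are visible.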
AI isn’t the only threat; weak systems are. The rush to explore new technology has outpaced many organisations’ ability to secure what they already have. At Brightly, we help businesses strengthen these foundations through our Security Baseline Review and ongoing cyber resilience support.
If you’re serious about using AI safely, start with the basics: know your data, secure your systems, and build resilience first. Get in touch with us to book a review to see where your organisation stands.