Monday 5th June 2023
Writer: Gennaro Migliaccio, Reviewer: Nicole Chesworth, Editor: Daniela Miccardi
_______________________
The first half of this year has certainly been an interesting one in the world of security. For starters, the widespread introduction of AI in the form of ChatGPT has taken the world by storm, generating a buzz and, in some cases, a little worry about what is to come.
AI tools are now widely available to everyday users, driving massive adoption but also growing security concerns. The mixed blessing is that most of these tools are open and easy to subscribe to, making them an advanced form of Shadow IT.
Due to this rapid adoption by everyday users, privacy and data protection have come under threat. A key concern we’ve identified is that users are “inadvertently” asking open AI tools questions that contain company or personal data, raising security issues within organisations. Businesses don’t necessarily want to block AI outright; they want to use it in a controlled and governed manner, where users are not oversharing corporate information with an open system.
Many organisations have updated their IT user policies to reflect AI and how it can and can’t be used in the context of their business. Because of this, the second half of the year will need to be dedicated to the evolution of Data Loss Prevention and Secure Web Gateway solutions, to better control and govern AI platforms, specifically those that are open. In any case, it will be exciting to see how the security market, vendors and even frameworks react to the widespread adoption of AI.
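As a rough illustration of the kind of control a DLP or Secure Web Gateway layer might apply, the minimal sketch below (Python, with entirely hypothetical patterns and function names) screens an outbound prompt for obviously sensitive content before it reaches a public AI service. Real products use far richer detection, but the principle is the same: inspect first, then allow, redact or block.

```python
import re

# Hypothetical patterns an organisation might flag before a prompt leaves the network.
# Real DLP/SWG products use far more sophisticated detection (classifiers,
# exact-data matching, document fingerprinting, etc.).
SENSITIVE_PATTERNS = {
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email_address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "internal_marker": re.compile(r"\b(confidential|internal only)\b", re.IGNORECASE),
}

def check_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive patterns found in an outbound AI prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]

if __name__ == "__main__":
    prompt = "Summarise this internal only report for customer jane.doe@example.com"
    findings = check_prompt(prompt)
    if findings:
        # A gateway might block, redact, or simply log and coach the user here.
        print(f"Prompt flagged before reaching the AI service: {findings}")
    else:
        print("Prompt allowed.")
```

In practice this sits at the network edge or in the browser, so the goal is governance and visibility rather than an outright ban, which mirrors how most businesses say they want to use AI.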
As for another prediction related to AI, it is likely that social engineering attacks will continue to rise as attackers leverage more AI tools to improve their attacks. Right now there are AI tools for creating synthetic media (deepfakes), we already have strong text-to-audio and audio transformation capabilities (voice changers), and we now have interactive chat AI that can hold a convincing conversation. Coupled together, this is an impressive kit bag of tools for sophisticated social engineering attacks.
Whilst I feel most of us are now aware of what a phishing email looks like, with the above tools at attackers’ fingertips, organisations will need to double down on training so employees can spot these even more sophisticated attacks.
In summary, AI is exciting: it is opening up a more advanced, more productive way of working. However, it’s not only general users looking to take advantage of these tools, and we always need to consider the security and compliance angle of using them. Finally, in terms of the security market, we are more likely to see functionality for controlling and monitoring AI usage within an organisation than for blocking it outright.
Thank you for reading.
If you have any questions, or would like to learn more about any of the topics covered in this blog, please email our friendly team via [email protected].
Want to keep informed? Sign up to our Newsletter