As with every new and emerging technology, employees have an obligation to ensure ethical and safe usage.
Unless you have been living under a rock, you have likely heard of DeepSeek, a Chinese artificial intelligence (AI) start-up that has created an AI chatbot similar to OpenAI’s ChatGPT. For organisations and their employees, it is a new route through which they can optimise workflows, address gaps and automate menial tasks.
However, as with any new technology, to dive in without preparation is irresponsible. Therefore, companies and their staff should ensure that they understand how best to use the tech without compromising personal or organisational security. Here’s how.
Don’t go against the grain
Recent studies, including one conducted by CybSafe and the National Cybersecurity Alliance (NCA), found that employees had used AI tools even when they were banned at work and even shared sensitive workplace information with AI technologies.
While AI technologies such as DeepSeek undoubtedly make working life easier by removing many of the boring and time-consuming aspects of people’s jobs, if an organisation has a no-AI policy then it likely lacks the infrastructure needed to combat malicious behaviours.
By using tools that a company has forbidden, you could potentially risk exposing yourself, the organisation and your fellow co-workers to cyber criminals. Simply put, if your company has banned tools such as DeepSeek, it is usually best to heed the warning.
However, organisations also need to consider modernising their practices as more and more people want to learn how to use AI-powered tools. By giving people the freedom to experiment safely and ensuring they are up to date on new technologies, companies can limit the risks and empower employees to grow their skillsets.
You’re its guardian, not its friend
Professionals planning on using DeepSeek should remember the golden rule: it is a non-sentient asset that requires supervision and should not be used in lieu of human oversight. Basically, no matter what task you give DeepSeek, make sure that you fact-check outputs for accuracy, that only those who require it have access and that nothing can be sent or issued without you first signing off on it.
No matter how much you trust technology (and let’s be honest, no tech is 100pc trustworthy), you need to be incredibly careful about what you share via DeepSeek.
If you are sharing information, make sure that it is not sensitive, that the systems you are using have robust security in place and that your employer is aware of the content and technology. Employers who are unsure should create an approved list of technologies and tools, so it is clear what is and is not permissible.
Don’t be afraid to report it
Whether you are using DeepSeek’s GenAI tools with or without the permission of your employer, you still have an obligation to ensure that you don’t expose yourself and the organisation to potential harm.
If you notice malicious or biased outputs from the tool, make sure to report it. While you may not be actively engaged in malicious behaviours yourself, the systems you are using may well have been trained to produce them, whether unintentionally or by design, so remaining vigilant against unhygienic cyber practices is a must. Every time you report dangerous online activity, you are helping the system become safer and more user friendly for everyone.
At the end of the day, no technology is ever going to be 100pc secure; the very nature of the internet dictates as much. However, there are always steps we can take to limit the level of harm we expose ourselves to. By following these guidelines and staying aware, employees and organisations can safely integrate a range of AI and GenAI tools into their workflows.