DeepSeek in Focus
What Leaders Need to Know
By Tommy Cooke, fueled by caffeine and curiosity
Feb 14, 2025

Key Points:
DeepSeek is a major disruptor in the AI market, rapidly gaining adoption due to its affordability and open-source appeal
Despite being open-source, DeepSeek's data is stored in China, raising security, compliance, and censorship concerns
Organizations must weigh the benefits of open-source AI against the risks of data privacy, geopolitical scrutiny, and regulatory uncertainty
In just over a year, DeepSeek has gone from an emerging AI model to a lasting force in the global AI market. Developed in China as an open-source large language model (LLM), it is rapidly gaining attention. In fact, as of January 2025 it had overtaken ChatGPT as the most downloaded free app on Apple's U.S. App Store.
DeepSeek's meteoric rise signals a shift in AI adoption trends and in the AI industry itself, and that warrants awareness and conversation among organizational leaders. As people gravitate toward alternative AI models outside the traditional Western ecosystem, it is important to understand the what, why, and how of this recent AI phenomenon.
As of February 2025, it is critically important to ensure that you are prepared to respond to DeepSeek in your organization. Leaders must accept the likelihood that their workforce is already using DeepSeek for work purposes.
DeepSeek is a startup based in Hangzhou, China. It was founded in 2023 by Liang Wenfeng, an entrepreneur who also founded the $7bn USD hedge fund High-Flyer in 2016. In January 2025, DeepSeek released its latest AI model, DeepSeek R1, a free AI chatbot that looks, feels, sounds, and responds very similarly to ChatGPT.
Unlike proprietary AI models developed in the West, such as ChatGPT, Claude, and Gemini, DeepSeek is freely available for organizations to customize and use at will. It is making waves not only because of how quickly and easily it can be adopted, but also because it was significantly cheaper to build than its competitors' models. While the exact figures are still debated, there is general agreement that OpenAI, the company behind ChatGPT, spent at least two to three times more to train its AI models.
This point matters because it bears on economic fallout, the balance of global AI development, market disruption, and accessibility and control. The implications stretch beyond cost alone: they affect how organizations plan AI adoption, set their budgets, and structure their technology ecosystems. If AI models can be produced at a fraction of the customary development cost while maintaining competitive performance, organizations must consider how this changes their long-term investment in AI.
Are proprietary solutions worth the price if open-source alternatives are rapidly closing the gap? Just as importantly, what hidden risks and trade-offs come with choosing a model purely on affordability?
Security & Compliance Concerns with DeepSeek
DeepSeek's rapid rise comes with critical questions for organizations, especially regarding security, governance, and compliance.
First, DeepSeek was developed in China, and its data is stored there as well. The Western world is thus concerned about how data are processed, who has access to them, and whether companies using DeepSeek are exposing themselves to regulatory or cybersecurity risks. For organizations bound by stringent data privacy regulations, this is likely a major red flag.
Second, DeepSeek is receiving considerable criticism for its censorship policies. It will not discuss certain political topics, and it was trained on filtered datasets. This affects the reliability of its responses and raises concerns about bias in AI-generated content. This alone, at least in part, explains why South Korea, Australia, and Taiwan have restricted or banned its use on government devices.
Third, today's turbulent geopolitical climate means that Western governments are increasingly wary of foreign influence, and AI is no exception. Governments and organizations around the globe are closely monitoring DeepSeek and asking whether the company and its AI should be restricted or even banned outright.
Organizations looking for a cost-effective entry into powerful AI are certainly attracted to DeepSeek, but they must also weigh the long-term viability and potential implications of adopting a tool facing regulatory and political scrutiny.
Is Your Staff Using DeepSeek? Guidance for Leaders
Given how quickly AI is being installed on your employees' personal devices, with DeepSeek clearly being no exception, there are things we feel strongly that you should consider:
Audit your AI Usage. Find out who in your company is using chatbots, especially DeepSeek, and how. Are employees feeding sensitive data into the model? Have they uploaded business plans, client data, or personal information about patients or coworkers? Do they understand the risks?
Assess Risk. What do your technology and AI use policies say? Do you have them yet? Has your organization established clear policies and procedures on AI tools that store data outside your legal jurisdiction? Ask yourself: would using DeepSeek put your organization at risk of legal noncompliance or even reputational harm? Who are your external stakeholders and investors? It's critical that you start thinking about their perceptions, expectations, and needs.
Engage and Communicate. One of our clients recently told us that an executive in their organization instructed staff to use AI chatbots freely, without discussing the decision with legal counsel. As you might imagine, this raised many concerns about understanding and mitigating AI-related risks. If you have not done so already, now is the time to articulate your organization's stance on AI clearly to employees, stakeholders, and partners.
Organizational leaders need to strategize not only about how they communicate with staff about AI, but also about communication as a form of organizational engagement. What do your employees truly think about AI? Do they silently use it to keep up with the creative or time-consuming demands of their jobs? Are they afraid of being found out and punished? Do they feel supported by you, and are they willing to provide honest feedback?
How Open is Open-source AI?
Industry observers are debating whether DeepSeek's biggest strength is also its biggest risk: it is open-source, meaning companies can see, download, and edit the AI's code. This opens interesting and valuable doors for many users and organizations.
For example, openly readable code is openly verifiable and openly scrutinized. If something in the code amounts to a fatal flaw, a security concern, or a path toward bias and harmful inference or decision-making, it can be detected more easily, because a global community of talented volunteer programmers and engineers can find and address such issues. In theory, this makes managing security, compliance, and governance more flexible and transparent. A proprietary AI vendor, by contrast, does not disclose its code or invite public scrutiny of its design; if something goes wrong, it is often your problem to address without that visibility.
On the other hand, industry observers are also questioning how "open" DeepSeek truly is. By conventional understandings, open-source means that code is openly available for anyone to inspect, modify, and use. When it comes to AI, however, there is much more than code: AI must be trained, and training requires data. DeepSeek does not provide full transparency about what data it was trained on, nor has it been entirely forthcoming about the details of its training process. These gaps are forcing organizations and governments to question how far DeepSeek can be trusted.
As an organizational leader, you need to ask yourself: is open-source AI a strategic advantage or a risk? Who controls the AI your organization uses? You, or vendors outside your jurisdiction?
DeepSeek is more than just another AI model. It's a disruptor in the AI industry. Many regard marketplace disruption as a positive, something that challenges norms, standards, and best-in-class models. But much more is potentially disrupted here, and the disruptions are not merely global, economic, and political; they reach inside your organization. Leaders must recognize that AI strategy is no longer just about choosing the most powerful model. It's about choosing the right balance between control, risk, and innovation.