Redmond, May 9, 2025 — In an immediate step to protect its internal data, Microsoft has officially banned the use of DeepSeek, a popular AI-based coding and research assistant, across all of its offices worldwide. The ban follows unmonitored data transmissions to external servers and rising concerns over potential data breaches.
According to an internal memo circulated among Microsoft employees earlier this week, the company has classified DeepSeek as a security risk because of its cloud-based operation, particularly in environments where sensitive code and proprietary research are handled. Microsoft's cybersecurity team reportedly identified instances where data processed through DeepSeek could leave internal networks, raising fears about intellectual property exposure.
“We are committed to protecting company and consumer data,” a Microsoft spokesperson said. “After reviewing DeepSeek’s architecture and data handling practices, we have determined that it does not meet our internal compliance and security standards.”
The move comes as tech corporations grow increasingly cautious about the use of third-party AI tools in corporate environments. While tools like DeepSeek have gained popularity among developers for boosting productivity and assisting with code review, they often require access to large datasets, which can create vulnerabilities if not properly managed.
Microsoft has advised its employees to transition to approved alternatives such as GitHub Copilot and Microsoft 365 Copilot, which it says offer similar functionality with tighter security and data controls. IT departments across Microsoft campuses have already begun blocking DeepSeek access on company networks and devices.
The ban adds to a growing trend of corporate restrictions on external AI platforms, as organizations struggle to strike a balance between innovation and information security.
As of now, DeepSeek’s parent company has not issued an official response to Microsoft’s decision.