Artificial intelligence in business: an economic security issue before it is a technological one
Artificial intelligence has established itself in just a few months as a central tool for transforming organizations. Task automation, faster decision-making processes, drafting assistance, complex data analysis: the promises are many and, for many companies, already tangible. However, this rapid adoption has been accompanied by a quieter phenomenon—yet potentially far more structural: the use of AI tools without a real governance framework, without clear control over data flows, and sometimes without genuine awareness of the implications for security and sovereignty.

It is in this context that a recent note from the French Directorate-General for Internal Security (DGSI), reported in particular by Le Figaro, has drawn attention. French domestic intelligence—usually associated with counter-terrorism or foreign interference—deemed it necessary to warn businesses about the risks of uncontrolled AI use. A strong signal. Because when the DGSI speaks out on an issue, it is not to comment on a technological trend, but to point to a matter of national economic security.
This position marks a significant shift: artificial intelligence is no longer only a performance tool; it is becoming a strategic subject, on a par with protecting information assets, digital sovereignty, and the competitiveness of French companies.
An institutional warning about loss of control
In its note, the DGSI fully acknowledges the productivity gains and opportunities offered by AI. It does not challenge the value of these technologies, but highlights the possible drifts when their use escapes any framework. Among the examples cited are practices that are now very widespread: employees copy-pasting internal documents into public AI tools without managerial validation, sometimes without even assessing the sensitivity of the information being shared.
The risk is twofold. First, data leaves the company’s secure environment to be processed by external infrastructures, often located outside the European Union. Second, it can be stored, analyzed, and even reused under conditions that are completely beyond the organization’s control. Some platforms state that data may be used to improve their models; others remain deliberately vague. In all cases, the company loses control over what it hands over.
The DGSI also warns about another drift: decision-making dependence. In one case mentioned, a company relied exclusively on AI to assess its business partners and steer strategic choices, without carrying out any additional checks. This situation illustrates a gradual—but worrying—shift in which a decision-support tool becomes a substitute for human analysis. Yet, as the note reminds us, AI does not produce truths, but statistically plausible outputs. It can be wrong, hallucinate, and generate false or incomplete information—sometimes with a high level of apparent credibility.
Finally, the DGSI refers to the rise of malicious uses of AI, including “deepfakes” capable of imitating a voice or appearance with such realism that some fraud attempts become almost undetectable. In this context, AI becomes not only a productive tool, but also a potential vector for manipulation, interference, and threats to business integrity.
The real issue: data sovereignty
What this warning reveals—beyond the concrete examples—is a deeper problem: AI is becoming a strategic layer of corporate information systems, without companies always having rethought their data governance accordingly.
Historically, companies have learned to secure their servers, networks, and databases. They know where their information is stored, who can access it, and under what conditions. AI, by contrast, introduces a new kind of flow: data is no longer only stored or exchanged—it is ingested, analyzed, reformulated, and sometimes integrated into models whose internal workings are opaque.
Using public AI therefore amounts to delegating part of the processing of strategic information to a third party, without always having a clear view of:
• the exact location of the servers,
• how long the data is retained,
• whether it may be reused,
• which laws apply, including extraterritorial legislation.
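The kind of governance gate this implies can be made concrete. The sketch below is purely illustrative (the pattern list and function name are invented for this example, not taken from any real policy): a minimal pre-submission check that flags sensitivity markers in a document before it is pasted into a public AI tool.

```python
import re

# Illustrative sensitivity markers; a real policy would be derived from the
# company's own data-classification scheme, not a hard-coded list.
SENSITIVE_PATTERNS = [
    r"(?i)\bconfidential\b",
    r"(?i)\btender\b",
    r"(?i)\bbudget\b",
]

def flag_before_external_ai(text: str) -> list[str]:
    """Return the sensitivity markers found in `text`.

    An empty list means no known marker was detected; a non-empty list
    means the document should not leave the secure environment without
    managerial validation.
    """
    return [p for p in SENSITIVE_PATTERNS if re.search(p, text)]

hits = flag_before_external_ai("Confidential: draft tender response, budget annex")
# A non-empty result -> block the copy-paste and escalate for human review.
```

Such a check cannot replace a governance framework, but it illustrates the principle: the decision about what may leave the secure perimeter should be made before the data reaches an external tool, not after.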
For a company, this is not only a legal or regulatory risk. It is a competitiveness issue. Internal data, contractual documents, commercial strategies, tender responses, or ways of working constitute an intangible asset of considerable value. Exposing them—even inadvertently—weakens the company’s position in its market.
AI and tenders: a critical use case for both buyers and bidders
Tenders are among the most sensitive areas for the use of artificial intelligence, because they concentrate strategic information for all stakeholders. The risk is not limited to the companies responding to tenders. It equally concerns the public and private buyers who issue and evaluate them.
On the buyer side, a tender dossier contains particularly sensitive elements:
• sometimes confidential operational needs,
• strategic orientations,
• budget constraints,
• internal evaluation criteria,
• organizational and technical trade-offs.
Using an unsecured AI to analyze, rephrase, or structure these documents can expose information that directly relates to the organization’s procurement strategy and industrial policy. In some sectors, this data may carry economic, competitive, or even geopolitical value.
On the bidder side, the risks are just as high. A tender response reveals:
• cost structure,
• positioning strategy,
• production or service delivery methods,
• internal organization,
• the company’s differentiating factors.
Entrusting this data to public AI amounts to exposing the very essence of the company’s competitive advantage. It not only weakens the company’s competitiveness, but can also call into question the fairness and integrity of the procurement process.
In both cases, the issue is the same: AI becomes a focal point for information risk. It does not process only administrative data, but elements that directly shape economic, contractual, and strategic decisions.
Applying AI to tenders without a secure architecture therefore means moving a critical process into an environment whose rules, infrastructures, and potential secondary uses the company does not control. This is precisely what makes this use case particularly sensitive—and what justifies a higher level of security than for most other AI applications in business.
A technological response aligned with security requirements
This is precisely why certain solutions have been designed from the outset around sovereignty and control. At Specgen, data security has never been considered an optional feature, but a prerequisite for any use of AI in the tender domain.
Two deployment models are offered.
The first relies on fully on-premises installations. In this case, the entire platform and the AI models are deployed on the client’s servers. The AI operates on the intranet, with no communication with the outside. Data remains physically and logically under the company’s exclusive control. This model meets the strictest confidentiality requirements, especially for organizations subject to high regulatory constraints or handling highly sensitive information.
The second model is based on a highly secured private cloud. The infrastructure is dedicated, hosting is controlled, data is never used to train the models, and the architecture is designed to meet the security standards expected by institutions such as ANSSI. This is not access to public AI, but a controlled environment—contractually governed and technically isolated.
In both cases, the logic is the same: no dependence on public AI tools, no uncontrolled data pooling, no opacity about processing.
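The contrast between the two deployment models can be summarized as a set of invariants. The profiles below are purely illustrative: the parameter names are invented for this sketch and do not reflect any vendor’s actual configuration.

```python
# Illustrative deployment profiles, one per model described above.
DEPLOYMENT_PROFILES = {
    "on_premises": {
        "model_hosting": "client_servers",   # platform and models on the intranet
        "outbound_network": False,           # no communication with the outside
        "data_leaves_perimeter": False,
        "training_on_client_data": False,
    },
    "private_cloud": {
        "model_hosting": "dedicated_infrastructure",  # controlled hosting
        "outbound_network": False,                    # technically isolated
        "data_leaves_perimeter": False,
        "training_on_client_data": False,             # never used to train models
    },
}

def violates_sovereignty(profile: dict) -> bool:
    """A profile is unacceptable if data can leave the controlled perimeter
    or be reused for model training -- the two drifts described above."""
    return bool(
        profile.get("data_leaves_perimeter", False)
        or profile.get("training_on_client_data", False)
    )
```

Whatever the hosting model, the invariants are the same; only the location of the controlled perimeter changes.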
One further design principle is fundamental: no “autopilot.” AI never acts autonomously. It assists, analyzes, and proposes—but the decision remains human. This approach ensures the tool strengthens teams’ expertise without ever replacing it.
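The “no autopilot” principle maps directly onto a human-in-the-loop pattern. The sketch below is a minimal illustration (the `Proposal` structure and function name are invented for this example): the model may only produce a proposal, and nothing is executed until a person has explicitly approved it.

```python
from dataclasses import dataclass

@dataclass
class Proposal:
    """An AI-generated suggestion; illustrative structure, not a real API."""
    summary: str
    approved: bool = False  # flipped only by an explicit human action

def apply_decision(proposal: Proposal) -> str:
    # The model only proposes; without human sign-off, nothing happens.
    if not proposal.approved:
        return "pending human review"
    return f"executed: {proposal.summary}"

p = Proposal(summary="shortlist supplier A")
status = apply_decision(p)        # blocked: no approval yet
p.approved = True                 # explicit human sign-off
status_after = apply_decision(p)  # now, and only now, the action proceeds
```

The design choice is that approval is a separate, deliberate step rather than a default: the safe state is inaction.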
Rethinking artificial intelligence
The DGSI alert should not be interpreted as a challenge to AI. On the contrary, it is an invitation to rethink how it is used. Artificial intelligence is neither inherently dangerous nor inherently virtuous. Everything depends on the chosen architecture, the governance framework put in place, and the level of control retained by the company.
In the long run, the real dividing line will not be between companies that use AI and those that do not, but between those that use it without sovereignty and those that integrate it into a controlled strategy. The first category faces growing risks. The second turns AI into a lasting competitive advantage.
This shift in perspective is essential. It allows us to move beyond a binary view opposing innovation and security. The goal is not to slow innovation, but to embed it in a framework that protects economic interests, data confidentiality, and the digital sovereignty of French companies.
In this context, AI must not become a factor of dependence. It must become a strategic tool—chosen, governed, and controlled.
