Soft Regulation in the Age of AI: Opportunity or Risk?

In the accelerating shift toward an AI-powered future, regulators, businesses, and civil society must grapple with a central question: should we depend on hard law (statutes, binding regulations and enforcement mechanisms), or should we lean more on soft regulation (codes of conduct, guidelines, voluntary frameworks and standards)?

At Shehata & Partners, a leading corporate law firm, our view is that the answer lies in balance. Soft regulation is not a substitute for binding rules, but when used strategically it can complement hard law, help close gaps, promote innovation and manage risk in a domain that evolves faster than conventional regulatory processes.

What is “soft regulation” in the AI context?

Soft regulation refers to norms, guidelines, codes, voluntary standards or best practices that are not legally enforceable in the same way as statute or regulation, but carry persuasive weight, reputational incentives or industry discipline.

In practice, soft regulation may take the form of:

  • Technical or interoperability standards (e.g. on auditability, robustness)
  • Ethical frameworks set by industry consortia
  • Voluntary certification or “trust marks”
  • Impact assessments or reporting requirements
  • Sectoral guidance or codes issued by regulatory bodies
  • Voluntary procurement criteria

Because soft regulation does not require formal legislative processes, it can adapt more quickly to emerging AI developments. But its nonbinding nature also means it can suffer from weak enforcement or uneven adoption.

The Opportunity: Why soft regulation makes sense for AI

  1. Pace and adaptability: AI evolves rapidly. Legislatures and regulators often lag. Soft regulation allows experts to propose norms more nimbly and adjust them over time.
  2. Room for experimentation: Prematurely imposing rigid rules could stifle innovation. Soft regulation gives room for iterative growth and learning.
  3. Lower upfront compliance burden: For early-stage ventures or smaller projects, full regulatory compliance may be onerous. Soft rules provide stepping stones toward maturity.
  4. Stakeholder buy-in and legitimacy: When rules emerge from participatory and transparent processes involving industry, academia and civil society, they are more likely to be accepted and implemented.
  5. Bridging gaps pending hard law: In jurisdictions without mature AI legislation, soft regulation can fill regulatory voids, offering interim guardrails until binding laws are adopted.

The Risk: Why soft regulation alone is insufficient or even dangerous

  1. Weak enforceability and accountability: Without legal force, entities may ignore soft norms unless reputational or commercial incentives are strong.
  2. Fragmentation and inconsistency: Multiple, overlapping codes or standards can lead to conflicting obligations or “norm shopping.”
  3. Regulatory capture or dilution: Powerful incumbents may influence voluntary norms to suit their interests, rather than the public good.
  4. False compliance or greenwashing: Some organizations may claim adherence to voluntary codes to mask deeper lapses in practice.
  5. Regulatory complacency: Overreliance on soft regulation may delay the adoption of binding rules in areas where they are urgently needed, such as liability, safety and discrimination.

A pragmatic middle path: combining soft and hard regulation

At Shehata & Partners, we advocate a hybrid approach:

  • Tiered regulation: impose binding rules where risks are highest, and use soft regulation for lower-risk AI applications.
  • Soft first, then hard: encourage adoption of voluntary norms initially, transitioning to binding regulation as maturity and consensus develop.
  • Embed soft regulation within binding law: statutes may require compliance with recognized voluntary codes or standards.
  • Contractual mechanisms for enforceability: parties may incorporate obligations such as audits, certification or compliance requirements in contracts.
  • Regulatory sandboxes and pilot zones: authorities may permit limited deployment of AI systems under oversight, while promoting voluntary norms within these frameworks.
  • Accountability structures: soft regulation should be paired with transparency, independent audits, stakeholder input and redress mechanisms.

The global and regional landscape

European Union and global developments

The EU AI Act (Regulation (EU) 2024/1689) entered into force on 1 August 2024, with provisions phased in through 2025–2027. It adopts a risk-based approach, categorizing AI systems into prohibited, high-risk and lower-risk, and imposes stricter obligations for higher-risk systems. The Act is supported by technical guidelines and voluntary codes of practice, illustrating how soft law can complement binding legislation.

On 10 July 2025, the European Commission also published a Voluntary Code of Practice for general-purpose AI models, addressing transparency, safety, copyright and security. This soft framework encourages alignment ahead of binding obligations.

Critics note, however, that even with the EU’s pioneering framework, regulation may struggle to keep pace with the speed of technological change, highlighting the importance of complementary soft measures.

Implications for Egypt and the MENA region

In Egypt and the broader MENA region, AI regulation remains nascent. Many AI developments fall under existing legal frameworks such as data protection, telecom, financial services and sectoral regulation.

This creates both opportunities and risks:

  • Soft regulation offers a pathway for local regulators, industry groups and standard-setting bodies to issue AI ethics codes and sectoral guidelines tailored to local needs.
  • Institutional capacity and enforcement are critical. Without them, soft norms risk remaining aspirational.
  • Regional diversity in regulatory maturity may cause misalignment with global regimes such as the EU AI Act.
  • Active participation by governments, businesses, academia and civil society in shaping voluntary norms will be essential to ensure legitimacy and effectiveness.

What businesses should do now

  • Participate in standards bodies and AI consortia: engage in shaping rules that will affect your operations.
  • Adopt internal AI governance: implement ethics policies, audits and impact assessments to build resilience.
  • Include compliance clauses in contracts and supply chains: require counterparties to follow recognized standards or certification schemes.
  • Monitor global developments: stay ahead of international frameworks such as the EU AI Act and beyond.
  • Seek counsel early: legal and technical guidance at the design stage can reduce downstream liability and compliance risks.

If your organization is navigating AI governance or compliance challenges, the team at Shehata & Partners is ready to support you. Contact us to discuss tailored legal strategies.

Law Update

Author: Ibrahim Shehata
