Domain-Driven Design & GenAI
This presentation explores how Domain-Driven Design (DDD) principles can be enhanced with Generative AI to create more effective software models that mirror business needs.
Presented by Smit Unagar and Akshit Dandyan

1

Domain Model & DDD Basics
Crafting software that precisely mirrors a business domain, defining its core concepts, intricate rules, and dynamic behaviors.
Domain Model
The Domain Model is a conceptual representation of a business domain, capturing its logic, processes, and relationships. It defines key entities, rules, and behaviors.
  • Concepts: Key entities (e.g., "Parking Space", "Vehicle", "Payment").
  • Rules: Constraints and invariants (e.g., "A Parking Space cannot be occupied by more than one Vehicle at a time", "A Parking Session must have a valid Payment before completion").
  • Behaviors: Actions and processes (e.g., "Start Parking Session", "Process Payment", "Vacate Parking Space").
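A minimal sketch of the above in code (TypeScript; the names and simplifications are ours, not a prescribed implementation):

  // Illustrative domain model for the parking examples above.
  class ParkingSpace {
    private occupantPlate: string | null = null;

    // Behavior: occupy a space when a parking session starts.
    occupy(plate: string): void {
      // Rule: a Parking Space cannot be occupied by more than one Vehicle at a time.
      if (this.occupantPlate !== null) {
        throw new Error(`Space already occupied by ${this.occupantPlate}`);
      }
      this.occupantPlate = plate;
    }

    // Behavior: "Vacate Parking Space".
    vacate(): void {
      this.occupantPlate = null;
    }
  }

  const space = new ParkingSpace();
  space.occupy("GJ-01-AB-1234");
  // space.occupy("GJ-01-CD-5678"); // would throw: the rule lives in the model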
Domain-Driven Design (DDD)
DDD is a software development approach that models software to deeply match a business domain.
It fosters a shared understanding and a ubiquitous language between domain experts and developers, translating complex business scenarios into robust software solutions.
Eric Evans, "Domain-Driven Design: Tackling Complexity in the Heart of Software" (2003)

2

Strategic DDD: How We Cut the System
Dividing the business into core, supporting, and generic subdomains with clear boundaries.
Core Subdomain
Reservations & Parking Sessions
Supporting Subdomain
Payments, Notifications
Generic Subdomain
Sensors, Identity/Access
Strategic Design:
  • Divide business into subdomains
  • Focus effort on the Core Subdomain
  • Other subdomains: Supporting & Generic
  • Use Bounded Contexts → clear borders & language
Strategic DDD = focus on problem space, boundaries, and language alignment.
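One illustration of why boundaries and language matter: the same term can carry a different model in each context. A hypothetical sketch (these types are our own, not from the system):

  // The same word, two models: each bounded context keeps only what it needs.
  namespace Reservations {
    // Here a vehicle is just the plate used to book a slot.
    export interface Vehicle { plate: string }
  }

  namespace Enforcement {
    // Here a vehicle carries compliance-relevant data.
    export interface Vehicle { plate: string; permitId?: string; openViolations: number }
  }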

3

Tactical DDD: Building Blocks
Defining entities, value objects, aggregates, and events inside each bounded context.
Tactical Design
  • Works inside each bounded context
  • Main building blocks:
    • Entity → unique thing with identity
    • Value Object → defined by values, immutable
    • Aggregate → group with rules, one root entity
    • Domain Event → something that happened (past tense)
    • Repository → saves/loads aggregates
    • Service → logic not belonging to one entity
Tactical DDD = solution space; defines structure close to code.
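A compact sketch of these building blocks (TypeScript; illustrative names, not the project's actual code):

  // Value Object: defined by its values, immutable.
  class Money {
    constructor(readonly amount: number, readonly currency: string) {}
    equals(other: Money): boolean {
      return this.amount === other.amount && this.currency === other.currency;
    }
  }

  // Domain Event: something that happened, named in past tense.
  interface ParkingStarted { type: "ParkingStarted"; sessionId: string; at: Date }

  // Entity & Aggregate Root: identity plus enforced rules.
  class ParkingSession {
    private endedAt: Date | null = null;
    constructor(readonly id: string, readonly startedAt: Date) {}

    get isActive(): boolean { return this.endedAt === null; }

    end(at: Date): void {
      // Invariant: end time must not precede start time.
      if (at.getTime() < this.startedAt.getTime()) {
        throw new Error("End time before start time");
      }
      this.endedAt = at;
    }
  }

  // Repository: saves/loads whole aggregates by identity.
  interface ParkingSessionRepository {
    byId(id: string): ParkingSession | undefined;
    save(session: ParkingSession): void;
  }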

4

From Themes → Epics → User Stories (Smart Parking)
Structuring requirements from business goals to small, testable slices.
1. Themes
Finding & Booking, Using a Spot, Money & Pricing, Rules & Compliance, Signals & Communication
2. Epics
Search & Discovery, Reservations & Holds, Session Start/Stop, Extensions & Adjustments, Tariffs & Discounts, Payments & Refunds, Entitlement Checks, Violations & Appeals, Sensing & Reconciliation, Notifications & Reporting
3. User Stories
"As a driver, I want to search nearby slots...",
"As a driver, I want to reserve a slot...",
"As a driver, I want to start a session on arrival.",
"As a driver, I want to extend my session.",
Theme = goal; Epic = large feature; Story = testable value slice.
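The hierarchy as plain data, using wording from this slide (a sketch; the field names are our own):

  // The requirements hierarchy as data: a theme contains epics, an epic contains stories.
  interface Story { asA: string; iWant: string }
  interface Epic { name: string; stories: Story[] }
  interface Theme { name: string; epics: Epic[] }

  const findingAndBooking: Theme = {
    name: "Finding & Booking",
    epics: [{
      name: "Reservations & Holds",
      stories: [{ asA: "driver", iWant: "reserve a slot" }],
    }],
  };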

5

Using LLMs to Generate the Domain Model
How GenAI transformed user stories into contexts, aggregates, and events.
Input
User Stories (grouped under Themes & Epics)
LLM Processing
Suggest: Bounded Contexts, Aggregates + Invariants, Domain Events
Human Review
Refine & validate
LLM Strengths
  • Fast draft of context maps
  • Suggests aggregates/events per epic
  • Helps unify business language
LLM Weaknesses
  • May invent rules not in stories
  • Blurs boundaries across themes
  • Needs expert correction
GenAI accelerates discovery, humans enforce boundaries & rules.
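The exact prompts used in this work are not reproduced here; the sketch below shows one plausible way stories could be packaged for the LLM step (the function and its wording are hypothetical):

  // Hypothetical prompt builder: packages grouped stories for the LLM step.
  function buildModelingPrompt(theme: string, epic: string, stories: string[]): string {
    return [
      `Theme: ${theme}`,
      `Epic: ${epic}`,
      "User stories:",
      ...stories.map((s, i) => `${i + 1}. ${s}`),
      "",
      "Suggest bounded contexts, aggregates with invariants, and past-tense",
      "domain events. Cite the story number that grounds each suggestion.",
    ].join("\n");
  }

  console.log(buildModelingPrompt(
    "Finding & Booking",
    "Reservations & Holds",
    ["As a driver, I want to reserve a slot..."]
  ));

Asking the model to cite a story number per suggestion feeds directly into the grounding mitigation discussed later.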

6

Strategic Output: Context Map (Smart Parking)
LLM-generated bounded contexts and their relationships across the system.
Bounded Contexts (from LLM draft)
  • Reservations (Finding & Booking)
  • Parking Sessions (Using a Spot)
  • Payments (Money & Pricing)
  • Enforcement (Rules & Compliance)
  • Notifications (Signals & Communication)
  • Reporting (Signals & Communication)
  • Sensing (Signals & Communication, supporting only)
Relationships
  • Reservations ↔ Sensing: Partnership
  • Parking Sessions → Payments: Customer–Supplier
  • Enforcement → Reservations & Sessions: checks entitlement/overstay
  • Reservations / Sessions / Enforcement → Notifications: Conformist
  • All contexts → Reporting: analytics feed
Context map shows how bounded contexts align with themes & interact strategically.
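A subset of this map expressed as checkable data (a sketch; the type names are illustrative):

  // The LLM-drafted context map as data that tooling or tests could verify.
  type Relation = "Partnership" | "Customer-Supplier" | "Conformist" | "AnalyticsFeed";
  interface ContextLink { from: string; to: string; relation: Relation }

  const contextMap: ContextLink[] = [
    { from: "Reservations", to: "Sensing", relation: "Partnership" },
    { from: "Parking Sessions", to: "Payments", relation: "Customer-Supplier" },
    { from: "Parking Sessions", to: "Notifications", relation: "Conformist" },
    { from: "Parking Sessions", to: "Reporting", relation: "AnalyticsFeed" },
  ];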

7

Tactical Output: Aggregates & Invariants (Smart Parking)
Aggregates enforce rules while events record what happened.
Finding & Booking → Reservations & Holds
Aggregate: Reservation
Invariant: No overlapping active reservations for same slot/time
Events: ReservationPlaced, ReservationActivated, ReservationExpired
Using a Spot → Session Lifecycle
Aggregate: ParkingSession
Invariant: End time ≥ Start time
Events: ParkingStarted, ParkingEnded, ParkingRepriced
Money & Pricing → Payments & Refunds
Aggregate: Payment
Invariant: Capture ≤ Authorized amount
Events: PaymentAuthorized, PaymentCaptured, PaymentFailed
Rules & Compliance → Violations & Appeals
Aggregate: ViolationCase
Invariant: One open violation per plate/site/time window
Events: ViolationDetected, FineAssessed, AppealSubmitted
Aggregates = guardians of business rules; events = what happened in past tense.
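How one of these aggregates might guard its invariant and record events, sketched for the Payment case (the event payloads are our assumptions):

  // Aggregate as guardian: the invariant is checked inside, events record outcomes.
  type PaymentEvent =
    | { type: "PaymentAuthorized"; amount: number }
    | { type: "PaymentCaptured"; amount: number }
    | { type: "PaymentFailed"; reason: string };

  class Payment {
    readonly events: PaymentEvent[] = [];
    private captured = 0;

    constructor(readonly authorized: number) {
      this.events.push({ type: "PaymentAuthorized", amount: authorized });
    }

    capture(amount: number): void {
      // Invariant: total captured must never exceed the authorized amount.
      if (this.captured + amount > this.authorized) {
        this.events.push({ type: "PaymentFailed", reason: "over-capture" });
        throw new Error("Capture exceeds authorized amount");
      }
      this.captured += amount;
      this.events.push({ type: "PaymentCaptured", amount });
    }
  }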

8

Events Suggested by LLM (AI-Assisted Event-Storming)
LLM simulated event-storming by generating domain events from stories.
Traditional DDD
  • Events discovered via event-storming workshops
  • Business + devs → sticky notes on a wall
LLM-Assisted Approach
  • Input: User Stories grouped under Themes/Epics
  • Output: candidate events (past tense)
  • Simulates event-storming automatically

Key Events (by Theme)
Finding & Booking
ReservationPlaced, ReservationExpired
Using a Spot
ParkingStarted, ParkingEnded
Money & Pricing
PaymentAuthorized, PaymentCaptured
Rules & Compliance
ViolationDetected, FineAssessed
Signals & Communication
NotificationSent, ReportGenerated
LLMs can propose domain events directly, reducing manual event-storming effort.
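The candidate events written as one discriminated union (payload fields are omitted because the stories do not specify them):

  // LLM-proposed events, grouped by theme, as a single event type.
  type DomainEvent =
    | { type: "ReservationPlaced" } | { type: "ReservationExpired" }  // Finding & Booking
    | { type: "ParkingStarted" } | { type: "ParkingEnded" }           // Using a Spot
    | { type: "PaymentAuthorized" } | { type: "PaymentCaptured" }     // Money & Pricing
    | { type: "ViolationDetected" } | { type: "FineAssessed" }        // Rules & Compliance
    | { type: "NotificationSent" } | { type: "ReportGenerated" };     // Signals & Communication

  const e: DomainEvent = { type: "ReservationPlaced" };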

9

Traceability: Stories → Domain Model
Ensuring every story maps to a context, aggregate, and event.
Why Traceability Matters
Ensures Business Alignment
Every story links directly to design and implementation, ensuring business needs are met.
Supports Validation & Audit
Provides a clear, auditable path to verify requirements are satisfied and ensure compliance.
LLM-Assisted Mapping
LLMs streamline the process by mapping stories to Bounded Contexts, Aggregates, Invariants, and Domain Events.
LLM Contribution
Mapped each story to:
  • Bounded Contexts
  • Aggregates & Invariants
  • Domain Events
Example Flow: From Idea to Code Structure
User Story
"As a driver, I want to reserve a slot."
Bounded Context
Reservations
Aggregate
Reservation
Invariant
No overlapping reservations
Domain Events
ReservationPlaced, ReservationActivated
Traceability = evidence that every story has a place in the model.
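One traceability row as data, reusing the example flow above (the field names are illustrative):

  // A single row of the traceability matrix: story → context → aggregate → events.
  interface TraceRow {
    story: string;
    boundedContext: string;
    aggregate: string;
    invariant: string;
    events: string[];
  }

  const row: TraceRow = {
    story: "As a driver, I want to reserve a slot.",
    boundedContext: "Reservations",
    aggregate: "Reservation",
    invariant: "No overlapping reservations",
    events: ["ReservationPlaced", "ReservationActivated"],
  };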

10

Evaluation Metrics: LLM Usefulness in DDD
Measuring coverage, correctness, and speed through traceability.
Coverage
27/27 stories mapped into model
  • 22 ✔️ Correct
  • 4 🔄 Adjusted
  • 1 ❌ Wrong

Boundary Quality
  • Contexts mostly correct
  • Errors: Notifications vs. Reporting overlap
Iteration Speed
  • LLM draft: minutes
  • Manual modeling: hours/days
Defects Found
  • Hallucinated rules (e.g., drivers must prepay)
  • Misplaced aggregates (auto-end session → should be Enforcement)
Traceability Evidence
  • Mini matrix sample (Story → Context → Event → Validation)
Evaluation = speed + coverage + correctness + maintainability.
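Recomputing the headline figures from the raw counts (22 + 4 + 1 = 27):

  // Coverage and correctness rates derived from the counts above.
  const counts = { correct: 22, adjusted: 4, wrong: 1 };
  const total = counts.correct + counts.adjusted + counts.wrong; // 27

  const pct = (n: number) => `${((n / total) * 100).toFixed(0)}%`;
  console.log(`mapped: ${total}/27`);               // 27/27
  console.log(`correct: ${pct(counts.correct)}`);   // 81%
  console.log(`adjusted: ${pct(counts.adjusted)}`); // 15%
  console.log(`wrong: ${pct(counts.wrong)}`);       // 4%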

11

SWOT: GenAI for Domain-Driven Design
Analyzing strengths, weaknesses, opportunities, and threats of AI in DDD.
Strengths (S) ✅
  • Rapid draft of contexts, aggregates, events
  • Improves coverage of stories
  • Enforces consistent naming/language
  • Reduces initial modeling time
Weaknesses (W) ⚠️
  • Hallucinates rules not in stories
  • Blurs context boundaries
  • Shallow treatment of business complexity
  • Requires constant human validation
Opportunities (O) 🚀
  • Use for automated event-storming & traceability
  • Generate acceptance tests from invariants
  • Link telemetry → model updates (adaptive DDD)
  • Scale workshops with non-technical stakeholders
Threats (T) 🔒
  • Over-reliance may weaken domain expertise
  • Risk of incorrect models if unchecked
  • Data privacy & IP leakage with LLM prompts
  • Governance gaps: who approves final model?
GenAI is an accelerator, not a replacement. Humans remain accountable for boundaries & invariants.

12

Risks & Mitigations in GenAI + DDD
Identifying risks and defining safeguards to ensure reliable outcomes.
⚠️ Risk: Hallucinated rules
🛡️ Mitigation: Require story ID grounding; reject uncited rules
⚠️ Risk: Blurred context boundaries
🛡️ Mitigation: Enforce bounded context contracts + ACL (anti-corruption layer)
⚠️ Risk: Over-reliance on AI
🛡️ Mitigation: Human validation checkpoints; domain experts approve invariants
⚠️ Risk: Data privacy & IP leakage
🛡️ Mitigation: Use on-prem / private LLMs; strip sensitive data before prompts
⚠️ Risk: Incorrect traceability
🛡️ Mitigation: Maintain traceability matrix; link every story to context/event
Mitigations keep GenAI outputs aligned with business rules & compliance.
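A sketch of the first mitigation, story-ID grounding (the IDs like "S-01" are hypothetical; any suggested rule without a valid citation is rejected):

  // Grounding check: an LLM-suggested rule must cite at least one known story ID.
  interface SuggestedRule { text: string; citedStoryIds: string[] }

  function filterGroundedRules(rules: SuggestedRule[], knownStoryIds: Set<string>): SuggestedRule[] {
    return rules.filter((r) =>
      r.citedStoryIds.length > 0 && r.citedStoryIds.every((id) => knownStoryIds.has(id))
    );
  }

  // Example: a hallucinated "drivers must prepay" rule carries no citation and is dropped.
  const known = new Set(["S-01", "S-02"]);
  const kept = filterGroundedRules(
    [
      { text: "No overlapping reservations", citedStoryIds: ["S-02"] },
      { text: "Drivers must prepay", citedStoryIds: [] }, // hallucinated → rejected
    ],
    known
  );
  console.log(kept.map((r) => r.text)); // ["No overlapping reservations"]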

13

Conclusion & Recommendation
GenAI is a co-pilot for DDD: fast and helpful, but humans stay in control.
Accelerated Modeling
  • LLMs significantly speed up strategic & tactical modeling.
Comprehensive Coverage
  • Achieved full story coverage (27/27) with necessary corrections.
Human Oversight is Key
  • Human validation remains essential for defining boundaries & invariants.
Recommendation
  • Use LLMs as accelerators in DDD workshops
  • Keep domain experts in the loop
  • Adopt a traceability matrix for accountability
  • Explore private LLMs for safe enterprise use
GenAI = co-pilot for DDD, not an autopilot.

14

Thank You
GenAI + DDD: Better Together
LLMs can accelerate domain modeling, but human expertise remains essential for validation and refinement.

15