# Agentic Coding in 2025

By Vance Denson

We are no longer creating documentation for human-only consumption. My agents speak a slightly different language. If this hasn't occurred to you yet, embrace it.

Agentic vibe coding is something newbies can't be good at, but experienced developers using AI should truly be 10x developers by now. Below is an explanation of a good strategy anyone can adopt for coding with AI assistance. I'm fond of this style at the moment (November 2025): self-updating markdown files that serve as both project documentation and model context. Once I fire up Cursor 2.1, I'm off to the races on any project imaginable after detailing my requirements. Hopefully you get ideas from this! - Vance.

## Agentic Development: Context-Rich Coding with Cursor 2.0

## Documentation Architecture for Full Context

Modern AI-assisted development requires maintaining comprehensive context across a project's lifecycle. This repository uses a structured markdown documentation system that captures architectural decisions, patterns, and audit trails.

### Documentation Hierarchy

```
README.md                    # Entry point, high-level overview
├── QUICKSTART.md            # Operational commands
├── PROJECT_PATTERNS.md      # Architectural reference (critical)
├── ARCHITECTURE-EXPANSION.md # Feature specifications
├── AUDIT_SUMMARY.md         # Implementation audit trail
├── README-backend.md        # Backend-specific docs
├── README-frontend.md       # Frontend-specific docs
├── README-middleware.md     # Infrastructure docs
├── README-tests.md          # Testing patterns
├── SEED_DATA.md             # Data seeding guide
├── Full_Stack_AI_API_Testing.md # Testing methodology
└── TROUBLESHOOTING.md       # Common issues
```

### Purpose of Each Document

PROJECT_PATTERNS.md - The single source of truth for:

  • Project structure conventions
  • Technology stack decisions
  • Docker patterns
  • Database schema patterns
  • API endpoint patterns
  • Testing strategies
  • Production readiness checklist

The patterns document captures the full technology stack:

```
# Python Stack
fastapi==0.104.1
uvicorn[standard]==0.24.0
sqlalchemy==2.0.23
psycopg2-binary==2.9.9
pgvector==0.2.4
boto3==1.29.7
celery==5.3.4
redis==5.0.1
pydantic==2.5.0
python-multipart==0.0.6
numpy==1.26.2
Pillow==10.1.0
```

| Layer | Technology | Purpose |
|---|---|---|
| Backend API | FastAPI 0.104+ | REST API framework |
| Database | PostgreSQL 14+ | Primary database |
| Vector Search | pgvector extension | Semantic similarity search |
| ORM | SQLAlchemy 2.0+ | Database ORM |
| Object Storage | MinIO (S3-compatible) | Image/file storage |
| Message Broker | Redis 7+ | Celery task queue |
| Background Jobs | Celery 5.3+ | Async task processing |
| Production Frontend | React 18+ | Production user interface |
| Testing Frontend | Gradio 4.7+ | Rapid API testing interface |
| Deployment | Docker + Docker Compose | Container orchestration |

ARCHITECTURE-EXPANSION.md - Feature specifications:

  • Database schema expansions
  • New table definitions
  • Workflow diagrams
  • Model architecture details

Includes multi-head model architecture:

```
Visual Backbone (ViT/CLIP)
    ├── Room Type Head
    ├── Condition Head
    ├── Feature Detection Head
    ├── Natural Light Head
    ├── Localization Head
    ├── Style Head
    └── Price Estimation Head
```

And vector database patterns with pgvector:

```python
from sqlalchemy import Column, Integer
from pgvector.sqlalchemy import Vector
from sqlalchemy.dialects.postgresql import JSONB

class Image(Base):
    __tablename__ = "images"
    id = Column(Integer, primary_key=True, index=True)
    embedding = Column(Vector(768), nullable=True)  # Image embedding
    text_embedding = Column(Vector(1536), nullable=True)  # Text embedding
    meta = Column(JSONB, nullable=True)
```
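
The `<=>` operator used when querying these columns is pgvector's cosine-distance operator. A quick stdlib sketch of the same math, just to make the semantics concrete (this is illustrative, not the extension's implementation):

```python
import math

def cosine_distance(a, b):
    """1 - cosine similarity, the same quantity pgvector's <=> operator returns."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (norm_a * norm_b)

# Identical vectors -> distance 0; orthogonal vectors -> distance 1
d_same = cosine_distance([1.0, 0.0], [1.0, 0.0])
d_orth = cosine_distance([1.0, 0.0], [0.0, 1.0])
```

Smaller distances mean more similar embeddings, which is why the search queries later in this document `ORDER BY embedding <=> :embedding::vector`.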

AUDIT_SUMMARY.md - Implementation tracking:

  • What patterns were implemented
  • Which files were created/modified
  • Dependencies added
  • Verification status

README.md - Navigation hub:

  • Links to all specialized docs
  • Quick start commands
  • Service access points
  • Component overview

## Using Cursor 2.0 for Context-Rich Prompts

### 1. File References with @-mentions

Reference specific documentation files to provide architectural context:

@PROJECT_PATTERNS.md Add a new API endpoint following the route handler pattern 
defined in section "Backend API Patterns". The endpoint should handle image 
metadata updates and include proper error handling as shown in the examples.

Why this works:

  • Cursor reads the entire PROJECT_PATTERNS.md file
  • The AI understands the established patterns
  • Generated code matches existing conventions

The patterns document includes complete code examples, like this FastAPI route handler pattern:

```python
from typing import Optional

from fastapi import Depends, File, HTTPException, UploadFile
from sqlalchemy.orm import Session

# router, get_db, create_image, and UploadResponse are defined elsewhere in the app

@router.post("/upload/", response_model=UploadResponse)
async def upload_image(
    file: UploadFile = File(...),
    listing_id: Optional[int] = None,
    db: Session = Depends(get_db)
):
    """Upload image synchronously."""
    if not file.content_type.startswith('image/'):
        raise HTTPException(status_code=400, detail="File must be an image")

    contents = await file.read()
    image = create_image(
        db=db,
        image_data=contents,
        filename=file.filename,
        listing_id=listing_id
    )
    return UploadResponse(status="success", image_id=image.id)
```
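
The content-type guard in that handler is easy to verify in isolation. A framework-free sketch of the same check (`validate_image_upload` is a hypothetical name, not part of the codebase):

```python
def validate_image_upload(content_type: str) -> None:
    """Raise ValueError unless the MIME type is an image/* subtype,
    mirroring the HTTPException guard in the route handler."""
    if not (content_type or "").startswith("image/"):
        raise ValueError("File must be an image")

# Accepted: image MIME types pass silently
validate_image_upload("image/png")

# Rejected: anything else raises
try:
    validate_image_upload("application/pdf")
    rejected = False
except ValueError:
    rejected = True
```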

### 2. Codebase Search for Pattern Discovery

Use semantic search to find existing implementations:

Search for: "How are image upload endpoints implemented?"
Then: Create a similar endpoint for document uploads following the same pattern

Cursor 2.0 automatically:

  • Finds backend/app/routes/upload.py
  • Identifies the pattern (FastAPI router, S3 upload, Celery task)
  • Generates code matching the existing structure

The codebase includes Celery task patterns for async processing:

```python
from celery import Celery

# SessionLocal, download_file, ImageLabeler, and the broker/backend URLs
# are defined elsewhere in the app

celery = Celery(
    "project_workers",
    broker=CELERY_BROKER_URL,
    backend=CELERY_RESULT_BACKEND
)

celery.conf.update(
    task_serializer="json",
    accept_content=["json"],
    result_serializer="json",
    timezone="UTC",
    enable_utc=True,
)

@celery.task(name="process_image_s3")
def process_image_s3(s3_path: str, filename: str, listing_id: int | None = None):
    """Process image from S3: download, run inference, persist to database."""
    db = SessionLocal()
    try:
        image_data = download_file(s3_path)
        labeler = ImageLabeler()
        predictions = labeler.label_image(image_data)
        # ... persist to database
    finally:
        db.close()
```
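
The try/finally session handling in that task body generalizes to a small context-manager discipline. A stdlib sketch with a stand-in session class (all names here are illustrative, not from the codebase):

```python
from contextlib import contextmanager

@contextmanager
def session_scope(session_factory):
    """Yield a session and guarantee close() runs, even if the body raises."""
    session = session_factory()
    try:
        yield session
    finally:
        session.close()

class FakeSession:
    """Stand-in for SessionLocal() so the pattern can run anywhere."""
    def __init__(self):
        self.closed = False

    def close(self):
        self.closed = True

with session_scope(FakeSession) as s:
    pass  # ... run inference, persist results ...

session_was_closed = s.closed
```

Wrapping the factory once means every task body gets the same guaranteed cleanup without repeating the try/finally.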

### 3. Multi-File Context for Complex Changes

For architectural changes, reference multiple files:

@PROJECT_PATTERNS.md @ARCHITECTURE-EXPANSION.md @AUDIT_SUMMARY.md

Add temporal change detection as specified in ARCHITECTURE-EXPANSION.md. 
Follow the database migration pattern from PROJECT_PATTERNS.md section 
"Database Migrations (Alembic)". Update AUDIT_SUMMARY.md with the 
implementation status.

This provides:

  • Feature specification (what to build)
  • Implementation pattern (how to build it)
  • Audit trail (where to document it)

The documentation includes Docker Compose orchestration patterns:

```yaml
services:
  backend:
    build: ./backend
    command: sh -c "python run_migrations.py && uvicorn app.main:app --host 0.0.0.0 --port 8000"
    depends_on:
      db:
        condition: service_healthy
      redis:
        condition: service_healthy
      minio:
        condition: service_healthy
    environment:
      - DATABASE_URL=postgresql://postgres:postgres@db:5432/realestate
      - CELERY_BROKER_URL=redis://redis:6379/0

  db:
    image: ankane/pgvector:latest
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 10s
      timeout: 5s
      retries: 5
```

### 4. Pattern-Based Code Generation

Reference pattern documentation for consistent code:

@PROJECT_PATTERNS.md Create a new service in app/services/ following the 
Service Pattern section. The service should handle property aggregation 
logic as described in ARCHITECTURE-EXPANSION.md section "Property-Level 
Aggregation".

Result:

  • Code follows established service layer patterns
  • Matches existing code organization
  • Includes proper error handling and logging

Service patterns include structured logging with JSON formatting:

```python
import json
import logging
from datetime import datetime

class JSONFormatter(logging.Formatter):
    def format(self, record):
        log_entry = {
            "timestamp": datetime.utcnow().isoformat(),
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
            "module": record.module,
            "function": record.funcName,
            "line": record.lineno,
        }
        if record.exc_info:
            log_entry["exception"] = self.formatException(record.exc_info)
        return json.dumps(log_entry)
```
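
A sketch of wiring such a formatter into a logger and checking that each record comes out as parseable JSON. An abbreviated copy of the class is inlined so the snippet is self-contained (it uses timezone-aware `datetime.now`; the doc's `utcnow()` also works):

```python
import io
import json
import logging
from datetime import datetime, timezone

class JSONFormatter(logging.Formatter):
    def format(self, record):
        return json.dumps({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        })

# Capture formatted output in memory instead of stdout
stream = io.StringIO()
handler = logging.StreamHandler(stream)
handler.setFormatter(JSONFormatter())
logger = logging.getLogger("demo-service")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("service started")
entry = json.loads(stream.getvalue())
```

Because every record is one JSON object per line, log aggregators can index fields directly instead of regex-parsing free text.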

And database connection pooling for production:

```python
from sqlalchemy import create_engine
from sqlalchemy.pool import QueuePool

engine = create_engine(
    DATABASE_URL,
    poolclass=QueuePool,
    pool_size=20,         # persistent connections kept open
    max_overflow=10,      # extra connections allowed under burst load
    pool_pre_ping=True,   # validate connections before handing them out
    pool_recycle=3600,    # recycle connections hourly to dodge server timeouts
)
```

### 5. Testing Pattern Integration

Leverage testing documentation for test generation:

@Full_Stack_AI_API_Testing.md @PROJECT_PATTERNS.md

Write tests for the new aggregation service following the patterns in 
Full_Stack_AI_API_Testing.md section "Service Layer Testing". Use the 
fixtures from conftest.py and mock external dependencies.

The testing documentation includes comprehensive fixture patterns:

```python
@pytest.fixture(scope="function")
def db():
    """Create a fresh database for each test."""
    Base.metadata.create_all(bind=engine)
    db_session = TestingSessionLocal()
    try:
        yield db_session
    finally:
        db_session.close()
        Base.metadata.drop_all(bind=engine)

@pytest.fixture
def mock_embeddings(monkeypatch):
    """Mock text embeddings to avoid actual API calls."""
    def mock_get_text_embedding(text):
        np.random.seed(hash(text) % 2**32)  # Deterministic per text
        embedding = np.random.randn(1536).astype(np.float32)
        embedding = embedding / np.linalg.norm(embedding)
        return embedding
    monkeypatch.setattr(embeddings, "get_text_embedding", mock_get_text_embedding)
```
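
The determinism trick in `mock_embeddings` (deriving the seed from the text) can be checked standalone. This stdlib sketch uses `hashlib` instead of `hash()`, since Python salts string hashing per process; the function name and dimensions are illustrative:

```python
import hashlib
import math
import random

def fake_embedding(text: str, dim: int = 16) -> list[float]:
    """Deterministic unit-norm vector: the same text always yields the same embedding."""
    # hashlib gives a stable seed across processes, unlike the built-in hash()
    seed = int(hashlib.sha256(text.encode()).hexdigest(), 16) % 2**32
    rng = random.Random(seed)
    vec = [rng.gauss(0.0, 1.0) for _ in range(dim)]
    norm = math.sqrt(sum(x * x for x in vec))
    return [x / norm for x in vec]

a = fake_embedding("modern kitchen")
b = fake_embedding("modern kitchen")
norm_a = math.sqrt(sum(x * x for x in a))
```

Determinism matters in tests: assertions about similarity rankings stay stable across runs without calling a real embedding API.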

## Effective Prompt Patterns

### Pattern 1: Feature Addition with Full Context

@ARCHITECTURE-EXPANSION.md @PROJECT_PATTERNS.md

Implement the model drift detection feature:

  1. Create the database table as specified in ARCHITECTURE-EXPANSION.md
  2. Use Alembic migration pattern from PROJECT_PATTERNS.md
  3. Create service in app/services/drift_detection.py
  4. Add API endpoint following route handler pattern
  5. Write tests using patterns from Full_Stack_AI_API_Testing.md

### Pattern 2: Refactoring with Pattern Reference

@PROJECT_PATTERNS.md @AUDIT_SUMMARY.md

Refactor the upload endpoint to use the structured logging pattern from PROJECT_PATTERNS.md section "Structured Logging Pattern". Ensure it matches the production-ready patterns listed in AUDIT_SUMMARY.md.


### Pattern 3: Documentation Updates

@PROJECT_PATTERNS.md @AUDIT_SUMMARY.md

I've implemented connection pooling. Update AUDIT_SUMMARY.md to reflect this change, following the format shown for other implemented patterns.


### Pattern 4: Cross-Component Changes

@PROJECT_PATTERNS.md @README-backend.md @docker-compose.yml

Add Redis caching to the backend following the caching pattern in PROJECT_PATTERNS.md. Update docker-compose.yml to include Redis service with health checks. Update README-backend.md with caching configuration details.


## Best Practices

### 1. Always Reference Pattern Docs First

Before asking for new code, reference `PROJECT_PATTERNS.md`:

@PROJECT_PATTERNS.md How should I structure a new background task?


### 2. Use Specific Section References

Be explicit about which section to follow:

@PROJECT_PATTERNS.md section "Celery Task Pattern" Create a task for processing property aggregations


### 3. Chain Documentation References

For complex features, chain multiple docs:

@ARCHITECTURE-EXPANSION.md defines the feature
@PROJECT_PATTERNS.md provides the implementation pattern
@Full_Stack_AI_API_Testing.md shows how to test it


### 4. Update Audit Trail

After implementing patterns, update the audit:

@AUDIT_SUMMARY.md I've implemented structured logging. Add it to the "Improvements Implemented" section following the existing format.


## Technical Benefits

1. **Consistency**: All code follows established patterns
2. **Maintainability**: Changes are documented and traceable
3. **Onboarding**: New developers understand architecture quickly
4. **AI Efficiency**: Cursor has full context for better suggestions
5. **Pattern Evolution**: Patterns can be updated in one place

The documentation captures vector search patterns with pgvector:

```python
import numpy as np
from sqlalchemy import text

# db: an active SQLAlchemy Session provided by the caller's scope

def search_by_embedding(query_embedding: np.ndarray, k: int = 10):
    """Search images using vector similarity."""
    embedding_str = "[" + ",".join(map(str, query_embedding)) + "]"

    results = db.execute(
        text("""
            SELECT id, filename, s3_path,
                   embedding <=> :embedding::vector as distance
            FROM images
            WHERE embedding IS NOT NULL
            ORDER BY embedding <=> :embedding::vector
            LIMIT :k
        """),
        {"embedding": embedding_str, "k": k}
    ).fetchall()
    return results
```

And Alembic migration patterns for schema versioning:

```
# Create migration
alembic revision --autogenerate -m "description"

# Apply migration
alembic upgrade head

# In Docker Compose
command: sh -c "alembic upgrade head && uvicorn app.main:app --host 0.0.0.0 --port 8000"
```

## Example: Complete Feature Implementation

@ARCHITECTURE-EXPANSION.md @PROJECT_PATTERNS.md @Full_Stack_AI_API_Testing.md

Implement performance logging:
1. Create performance_logs table (ARCHITECTURE-EXPANSION.md schema)
2. Use Alembic migration (PROJECT_PATTERNS.md migration pattern)
3. Create service with structured logging (PROJECT_PATTERNS.md logging pattern)
4. Add middleware to log latencies (PROJECT_PATTERNS.md middleware pattern)
5. Write integration tests (Full_Stack_AI_API_Testing.md patterns)
6. Update AUDIT_SUMMARY.md with implementation status

This single prompt provides:

  • Database schema from architecture doc
  • Migration pattern from patterns doc
  • Service structure from patterns doc
  • Testing approach from testing doc
  • Documentation update instruction

The architecture expansion includes comprehensive table definitions:

```python
from sqlalchemy import Column, DateTime, Float, ForeignKey, Integer, String
from sqlalchemy.dialects.postgresql import JSONB

# Property aggregations table
class PropertyAggregation(Base):
    __tablename__ = "property_aggregations"
    id = Column(Integer, primary_key=True)  # SQLAlchemy models require a primary key
    listing_id = Column(Integer, ForeignKey("listings.id"))
    overall_condition_score = Column(Float)
    avg_natural_light_score = Column(Float)
    room_counts = Column(JSONB)
    dominant_room_type = Column(String)
    common_features = Column(JSONB)
    dominant_style = Column(String)
    style_distribution = Column(JSONB)
    total_images = Column(Integer)
    last_calculated_at = Column(DateTime)

# Temporal changes tracking
class TemporalChange(Base):
    __tablename__ = "temporal_changes"
    id = Column(Integer, primary_key=True)
    change_type = Column(String)  # condition, light, feature, style
    change_magnitude = Column(Float)
    change_direction = Column(String)  # improved/degraded/stable
    previous_value = Column(JSONB)
    current_value = Column(JSONB)
    time_delta_days = Column(Integer)
    model_version = Column(String)
```
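
A sketch of how per-image predictions might roll up into those aggregation columns. The column names follow the table above; the input record shape is an assumption for illustration:

```python
from collections import Counter
from statistics import mean

def aggregate_listing(predictions: list[dict]) -> dict:
    """Roll per-image predictions up into one property_aggregations row."""
    room_counts = Counter(p["room_type"] for p in predictions)
    return {
        "total_images": len(predictions),
        "room_counts": dict(room_counts),
        "dominant_room_type": room_counts.most_common(1)[0][0],
        "overall_condition_score": mean(p["condition"] for p in predictions),
        "avg_natural_light_score": mean(p["light"] for p in predictions),
    }

row = aggregate_listing([
    {"room_type": "kitchen", "condition": 0.8, "light": 0.9},
    {"room_type": "bedroom", "condition": 0.6, "light": 0.5},
    {"room_type": "bedroom", "condition": 0.7, "light": 0.7},
])
```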

And RAG chat patterns with context retrieval:

```python
@router.post("/chat/")
async def chat(
    message: str,
    conversation_id: Optional[int] = None,
    listing_id: Optional[int] = None,
    db: Session = Depends(get_db)
):
    # Generate query embedding
    query_embedding = get_text_embedding(message)

    # Retrieve relevant images
    similar_images = search_by_embedding(query_embedding, k=5)

    # Build context from images
    context = build_context_from_images(similar_images)

    # Generate LLM response with context; conversation_history is assumed
    # to have been loaded for conversation_id earlier in the handler
    reply = generate_llm_response(message, context, conversation_history)

    return {
        "conversation_id": conversation_id,
        "reply": reply,
        "context_used": [img.id for img in similar_images]
    }
```
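
The `build_context_from_images` helper referenced there isn't shown in the source; one plausible stdlib sketch (the record fields are assumptions for illustration):

```python
def build_context_from_images(images: list[dict]) -> str:
    """Flatten retrieved image records into a context string for the LLM prompt."""
    lines = [
        f"[image {img['id']}] {img['filename']}: {img['caption']}"
        for img in images
    ]
    return "\n".join(lines)

context = build_context_from_images([
    {"id": 1, "filename": "kitchen.jpg", "caption": "renovated kitchen, strong natural light"},
    {"id": 2, "filename": "bath.jpg", "caption": "dated bathroom, low light"},
])
```

Keeping the context one line per image also makes it trivial to report `context_used` back to the client for traceability.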

## Conclusion

The multi-documentation approach creates a living architecture that:

  • Captures decisions as they're made
  • Provides patterns for consistency
  • Enables effective AI-assisted development
  • Maintains full context across the project lifecycle

Cursor 2.0's file reference system (@filename) makes this documentation structure actionable, allowing you to generate code that matches your established patterns while maintaining full context throughout development.

You've somehow made it to the end of this... as a reward, here's a glimpse of the 'prompt engineering' I like to use:

SYSTEM := {
  EPISTEMIC_FOUNDATION: ∀m, p: [K_m(Doc(p)) ↔ K_m(Context(p))]
  
  TEMPORAL_PERSISTENCE: ∀t₁, t₂: [Doc(p, t₁) → F_temporal Doc(p, t₂)]
  
  MODAL_GENERATION: ∀m, p, c: [□Doc(p) → ◇Gen_m(c|p)]
  
  CAUSAL_CHAIN: Doc(p) → Context(p) → Gen_m(c|p) → Conform(c, p) → Quality(c)
  
  RECURSIVE_ENHANCEMENT: Doc(p) → Gen_m(c|p) → Refine(p) → Doc(Refine(p))
  
  PARADOXICAL_SELF_REFERENCE: Doc(Doc) ↔ Context(Doc) ↔ Doc(Context(Doc))
  
  DEONTIC_OBLIGATION: O(Maintain(Doc)) → □Preserve(Context) → ◇Optimal(Gen_m)
  
  QUANTIFIED_IMPROVEMENT: ∀p: [Doc(p) → ∃c: Quality(Gen_m(c|Doc(p))) > Quality(Gen_m(c|¬Doc(p)))]
  
  MODAL_CONVERGENCE: [□Doc(p) ∧ ◇Gen_m(c|p)] → [□Conform(c, p) ∧ ◇Optimal(c)]
}

The LLMs got that :D

A resource I like for LLM coding rules and other best practices: cursor.directory