# Context Provision Analysis: Intent Agent Context Flow

## Problem Statement

The Intent Recognition Agent (`src/agents/intent_agent.py`) expects a context dictionary with a `conversation_history` key, but the actual context structure provided by `EfficientContextManager` does not include this key. This results in `Available Context: []` being shown in the intent recognition prompt.

## Context Flow Trace

### Step 1: Orchestrator Retrieves Context

**Location**: `src/orchestrator_engine.py:172`

```python
context = await self._get_or_create_context(session_id, user_input, user_id)
```

This calls `context_manager.manage_context()` which returns a context dictionary.

### Step 2: Context Structure Returned by Context Manager

**Location**: `src/context_manager.py:550-579` (`_optimize_context` method)

The context manager returns the following structure:

```python
{
    "session_id": str,
    "user_id": str,
    "user_context": str,  # 500-token user persona summary
    "interaction_contexts": [  # List of interaction summary dicts
        {
            "summary": str,  # 50-token interaction summary
            "timestamp": str
        },
        ...
    ],
    "combined_context": str,  # Formatted string: "[User Context]\n...\n[Interaction Context #N]\n..."
    "preferences": dict,
    "active_tasks": list,
    "last_activity": str
}
```
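
For orientation, `combined_context` is a pre-formatted string built from the other fields. The sketch below only illustrates the format shown in the comment above; the function name and exact assembly logic are assumptions, not the actual `_optimize_context` code.

```python
from typing import Any, Dict, List

def build_combined_context(user_context: str,
                           interaction_contexts: List[Dict[str, Any]]) -> str:
    """Illustrative only: approximates the format documented above.

    The real _optimize_context method may label, order, or truncate sections
    differently; this helper name is not from the codebase.
    """
    parts: List[str] = []
    if user_context:
        parts.append(f"[User Context]\n{user_context}")
    # Interaction summaries are numbered, matching "[Interaction Context #N]".
    for i, ic in enumerate(interaction_contexts, start=1):
        parts.append(f"[Interaction Context #{i}]\n{ic.get('summary', '')}")
    return "\n\n".join(parts)
```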

### Step 3: Intent Agent Receives Context

**Location**: `src/orchestrator_engine.py:201-203`

```python
intent_result = await self.agents['intent_recognition'].execute(
    user_input=user_input,
    context=context
)
```

### Step 4: Intent Agent Attempts to Access Context

**Location**: `src/agents/intent_agent.py:109`

```python
def _build_chain_of_thought_prompt(self, user_input: str, context: Dict[str, Any]) -> str:
    return f"""
    Analyze the user's intent step by step:

    User Input: "{user_input}"
    
    Available Context: {context.get('conversation_history', [])[-2:] if context else []}
    ...
    """
```

**Issue**: The code reads `context.get('conversation_history', [])`, but the context dictionary **does not contain this key**, so the lookup silently falls back to the empty-list default on every turn.
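
The mismatch is easy to reproduce outside the application. The snippet below is a standalone illustration (not repository code) using a context dict shaped like the Step 2 structure:

```python
# Standalone illustration: a context dict shaped like the one returned by
# EfficientContextManager (see Step 2), queried the way the intent agent does.
context = {
    "interaction_contexts": [
        {"summary": "User asked about machine learning definition", "timestamp": "..."},
    ],
    "user_context": "",
    "combined_context": "[Interaction Context #1]\nUser asked about machine learning definition",
}

# What the intent agent currently interpolates into its prompt:
print(context.get("conversation_history", [])[-2:])
# [] -- the key is absent, so the default empty list is always used

# What is actually available under the keys the context manager populates:
print([ic["summary"] for ic in context["interaction_contexts"][-2:]])
# ['User asked about machine learning definition']
```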

## Root Cause Analysis

### Expected Context Structure (by Intent Agent)
The intent agent expects:
```python
context = {
    'conversation_history': [
        {...},  # Previous conversation turn
        {...},  # Another previous turn
        ...
    ]
}
```

### Actual Context Structure (from Context Manager)
The context manager provides:
```python
context = {
    'interaction_contexts': [
        {'summary': '...', 'timestamp': '...'},
        {'summary': '...', 'timestamp': '...'},
        ...
    ],
    'user_context': '...',
    'combined_context': '...',
    ...
}
```

### Why This Mismatch Exists

1. **Historical Evolution**: The intent agent was likely designed with an earlier context structure in mind
2. **Context Manager Redesign**: The context manager was redesigned to use hierarchical summarization (50/100/500 token tiers) instead of full conversation history
3. **Missing Adaptation**: The intent agent was not updated to use the new context structure

## Impact Analysis

### First Turn (No Previous Context)
- `interaction_contexts` = `[]` (empty list)
- `user_context` = `""` (empty string if first-time user)
- **Result**: `Available Context: []` ✓ (Correct, but for the wrong reason: the key doesn't exist)

### Second Turn (After First Interaction)
- `interaction_contexts` = `[{summary: "...", timestamp: "..."}]` (1 interaction)
- `user_context` = `""` or persona summary if user has history
- **Result**: `Available Context: []` ✗ (Incorrect - context exists but wrong key accessed)

### Third Turn and Beyond
- `interaction_contexts` = `[{...}, {...}, ...]` (multiple interactions)
- `user_context` = Persona summary (if user has sufficient history)
- **Result**: `Available Context: []` ✗ (Incorrect - rich context exists but not accessible)

## Context Accumulation Over Multiple Turns

### Turn 1: User says "What is machine learning?"
1. Context Manager retrieves context:
   - `interaction_contexts`: `[]` (no previous interactions)
   - `user_context`: `""` (first-time user, no persona yet)
   - Context passed to intent agent

2. Intent Agent builds prompt:
   - `context.get('conversation_history', [])[-2:]` → `[]`
   - **Shows**: `Available Context: []`

3. After response, interaction context generated:
   - `generate_interaction_context()` called
   - Creates 50-token summary: "User asked about machine learning definition"
   - Stored in `interaction_contexts` table

### Turn 2: User says "How does it differ from deep learning?"
1. Context Manager retrieves context:
   - `interaction_contexts`: `[{summary: "User asked about machine learning definition", timestamp: "..."}]`
   - `user_context`: Still `""` (not enough history for persona)
   - Context passed to intent agent

2. Intent Agent builds prompt:
   - `context.get('conversation_history', [])[-2:]` → `[]` (key doesn't exist!)
   - **Shows**: `Available Context: []` ✗ **SHOULD show the interaction summary**

3. After response, another interaction context generated:
   - Creates: "User asked about differences between machine learning and deep learning"
   - Stored in `interaction_contexts` table
   - Now has 2 interaction contexts

### Turn 3: User says "Can you explain neural networks?"
1. Context Manager retrieves context:
   - `interaction_contexts`: `[{summary: "...deep learning..."}, {summary: "...machine learning..."}]`
   - `user_context`: Still `""` (persona generated only after sufficient history)
   - Context passed to intent agent

2. Intent Agent builds prompt:
   - `context.get('conversation_history', [])[-2:]` → `[]` (key doesn't exist!)
   - **Shows**: `Available Context: []` ✗ **SHOULD show 2 interaction summaries**

### After ~20-50 Interactions (User Persona Generation)
1. Context Manager retrieves context:
   - `interaction_contexts`: `[{...}, {...}, ...]` (up to 20 most recent)
   - `user_context`: `"User persona: Interested in AI topics, asks technical questions..."` (500-token summary)
   - Context passed to intent agent

2. Intent Agent builds prompt:
   - `context.get('conversation_history', [])[-2:]` → `[]` (key doesn't exist!)
   - **Shows**: `Available Context: []` ✗ **SHOULD show rich context including user persona and interaction history**

## Available Context Data (Not Being Used)

### What Context Actually Contains

#### Turn 1:
```python
context = {
    "interaction_contexts": [],  # Empty - first turn
    "user_context": "",  # Empty - first-time user
    "combined_context": "",  # Empty
}
```

#### Turn 2:
```python
context = {
    "interaction_contexts": [
        {"summary": "User asked about machine learning definition", "timestamp": "..."}
    ],
    "user_context": "",  # Still empty
    "combined_context": "[Interaction Context #1]\nUser asked about machine learning definition",
}
```

#### Turn 3+:
```python
context = {
    "interaction_contexts": [
        {"summary": "User asked about differences between ML and DL", "timestamp": "..."},
        {"summary": "User asked about machine learning definition", "timestamp": "..."},
        # ... more interactions
    ],
    "user_context": "User persona: Interested in AI topics...",  # If sufficient history
    "combined_context": "[User Context]\nUser persona...\n\n[Interaction Context #2]\n...\n\n[Interaction Context #1]\n...",
}
```

## Recommended Solutions

### Option 1: Use `interaction_contexts` Directly (Minimal Change)

**Modify**: `src/agents/intent_agent.py:109`

```python
# OLD:
Available Context: {context.get('conversation_history', [])[-2:] if context else []}

# NEW:
Available Context: {[ic.get('summary', '') for ic in context.get('interaction_contexts', [])[-2:]] if context else []}
```

**Pros**:
- Minimal code change
- Uses actual context data
- Shows last 2 interaction summaries

**Cons**:
- Only shows summaries, not full conversation
- Loses timestamp information
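
To make the effect of Option 1 concrete, here is the same comprehension applied to sample Turn 3 data (summaries taken from the walkthrough above):

```python
# Sample data only; mirrors the Turn 3 walkthrough, oldest summary first.
context = {
    "interaction_contexts": [
        {"summary": "User asked about machine learning definition", "timestamp": "..."},
        {"summary": "User asked about differences between ML and DL", "timestamp": "..."},
    ]
}

recent = [ic.get('summary', '')
          for ic in context.get('interaction_contexts', [])[-2:]] if context else []
print(recent)
# ['User asked about machine learning definition',
#  'User asked about differences between ML and DL']
```

Note that the `[-2:]` slice assumes `interaction_contexts` is ordered oldest-to-newest; if the context manager returns newest-first, the slice should be adjusted accordingly.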

### Option 2: Use `combined_context` (Preferred)

**Modify**: `src/agents/intent_agent.py:109`

```python
# OLD:
Available Context: {context.get('conversation_history', [])[-2:] if context else []}

# NEW:
Available Context: {context.get('combined_context', 'No previous context available') if context else 'No context available'}
```

**Pros**:
- Uses pre-formatted context string
- Includes both user context and interaction contexts
- More informative for intent recognition
- Better reflects the hierarchical context system

**Cons**:
- May include more than just last 2 turns (includes up to 10 interactions)
- Longer context string

### Option 3: Build Conversation History from Interaction Contexts (Most Flexible)

**Modify**: `src/agents/intent_agent.py:101-118`

```python
def _build_chain_of_thought_prompt(self, user_input: str, context: Dict[str, Any]) -> str:
    """Build Chain of Thought prompt for intent recognition"""
    
    # Extract conversation history from interaction_contexts
    conversation_history = []
    if context:
        interaction_contexts = context.get('interaction_contexts', [])
        # Get last 2 interaction summaries for context
        for ic in interaction_contexts[-2:]:
            conversation_history.append({
                'summary': ic.get('summary', ''),
                'timestamp': ic.get('timestamp', '')
            })
    
    # Optionally include user context
    user_context_summary = ""
    if context and context.get('user_context'):
        user_context_summary = f"\nUser Context: {context.get('user_context')[:200]}..."  # Truncate for brevity
    
    return f"""
    Analyze the user's intent step by step:

    User Input: "{user_input}"
    
    Previous Context: {conversation_history if conversation_history else 'No previous interactions'}
    {user_context_summary}
    
    Step 1: Identify key entities, actions, and questions in the input
    Step 2: Map to intent categories: {', '.join(self.intent_categories)}
    Step 3: Consider the conversation flow and user's likely goals
    Step 4: Assign confidence scores (0.0-1.0) for each relevant intent
    Step 5: Provide reasoning for the classification
    
    Respond with JSON format containing primary_intent, secondary_intents, confidence_scores, and reasoning_chain.
    """
```

**Pros**:
- Flexible format
- Can include both interaction history and user context
- Properly handles empty context
- More informative for LLM

**Cons**:
- More code changes
- Slightly more complex

## Current Behavior Summary

| Turn | Interaction Contexts Available | User Context Available | Intent Agent Sees |
|------|-------------------------------|----------------------|-------------------|
| 1    | 0                             | No                   | `[]` (empty)      |
| 2    | 1                             | No                   | `[]` (empty) ✗    |
| 3    | 2                             | Possibly             | `[]` (empty) ✗    |
| 10+  | 10-20                         | Yes (if sufficient history) | `[]` (empty) ✗ |

**Key Issue**: The intent agent never sees the available context data because it reads the wrong key (`conversation_history` instead of `interaction_contexts` or `combined_context`).

## Testing Recommendations

1. **Verify Context Structure**: Log the actual context dict passed to the intent agent (see the sketch after this list)
2. **Test Multiple Turns**: Verify context accumulates correctly over multiple interactions
3. **Test Persona Generation**: Verify user_context appears after sufficient history
4. **Compare Intent Accuracy**: Measure if fixing context access improves intent recognition accuracy
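
For the first recommendation, a small debug helper (not part of the repository; names are illustrative) can be called at the top of `_build_chain_of_thought_prompt` or just before the agent is invoked in the orchestrator:

```python
import json
import logging

logger = logging.getLogger(__name__)

def log_context_shape(context: dict) -> None:
    """Debug helper: summarize what the intent agent actually receives."""
    if not context:
        logger.debug("Intent agent received empty/None context")
        return
    logger.debug("Context keys: %s", sorted(context.keys()))
    logger.debug("interaction_contexts count: %d",
                 len(context.get("interaction_contexts", [])))
    logger.debug("user_context length: %d", len(context.get("user_context", "")))
    logger.debug("combined_context preview: %r",
                 context.get("combined_context", "")[:200])
    # Confirms the bug: this is what the current prompt interpolates.
    logger.debug("conversation_history (legacy key): %s",
                 json.dumps(context.get("conversation_history", [])[-2:]))
```

Running this across several turns should show `interaction_contexts` growing while `conversation_history` never appears, confirming the root cause before and after applying a fix.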

## Implementation Priority

**High Priority**: This bug prevents the intent agent from using available conversation context, which likely:
- Reduces intent recognition accuracy for follow-up questions
- Prevents context-aware intent classification
- Leaves the hierarchical context summarization system effectively unused

**Recommended Fix**: Option 2 (use `combined_context`): it remains a one-line change while surfacing the most complete context for intent recognition.