Real Benefits for Your Business

**Cost Visibility**: Real-time dashboards, detailed team reports, and automatic alerts for optimal management of your AI subscription.

**No Wasted Credits**: No more billing surprises. Every credit is tracked, justified, and automatically optimized by our intelligent engine.

**Budget Control**: Configurable usage policies, per-user limits, and a complete audit trail for your compliance needs.
Based on our actual agent engine process
🎯 Our commitment: zero hidden credits
Discover exactly how each credit is consumed in the Swiftask process.
Precise step-by-step breakdown, true to our real technical architecture.
| Step | Credit Consumption |
| --- | --- |
| User Question | Your question text converted to input credits |
| Context Sent to LLM | Context + history + optimized instructions |
| Intermediate Response (reasoning) | Action plan generated by the LLM |
| Swiftask Skills Call | Only if LLM processing is needed to handle results |
| Skill Result → LLM | Tool data integrated into context (only if an LLM call follows) |
| Final Enriched Response | Complete context + final response generation |
| Display to User | Simple transmission, no LLM interaction |
The technical process that optimizes your AI subscription credits

Definition: An entity that coordinates the entire process. It acts as the conductor between reasoning (LLM) and action (skills). The agent engine:

1. Receives the user question
2. Builds a query with context for the LLM
3. Interprets the LLM response
4. Calls Swiftask skills if necessary
5. Transmits results to the LLM for an enriched response
6. Provides the final answer to the user
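The coordination loop described above can be sketched in a few lines of Python. This is a minimal illustration of the pattern, not Swiftask's actual implementation: the names `Plan`, `call_llm`, and `run_skill` are hypothetical stand-ins.

```python
from dataclasses import dataclass

# Hypothetical sketch of the agent-engine loop; Plan, call_llm and
# run_skill are illustrative names, not Swiftask's real API.

@dataclass
class Plan:
    needs_skill: bool
    skill_name: str = ""
    answer: str = ""

def call_llm(prompt: str) -> Plan:
    # Stub LLM: request a search once, then answer using the tool result.
    if "tool result" in prompt:
        return Plan(needs_skill=False, answer="Final answer using: " + prompt)
    return Plan(needs_skill=True, skill_name="search")

def run_skill(name: str) -> str:
    # Stub skill: pretend to run a search and return its output.
    return f"{name} results"

def agent_engine(question: str, history: str = "") -> str:
    """Coordinate reasoning (LLM) and action (skills) until a final answer."""
    prompt = f"{history}\n{question}"          # 1-2. build contextual query
    plan = call_llm(prompt)                    # 3. intermediate reasoning
    while plan.needs_skill:                    # 4. skill needed?
        result = run_skill(plan.skill_name)    #    execute the skill
        prompt = f"{prompt}\ntool result: {result}"
        plan = call_llm(prompt)                # 5. enriched response
    return plan.answer                         # 6. final answer to the user
```

Each pass through the `while` loop is one LLM-to-skill iteration, which is why iteration count is a key cost driver.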
1. Your complex question is transformed into credits to be used as input by the LLM.
2. The agent builds a prompt with your question, previous history, and additional context.
3. The LLM responds with intermediate reasoning and a structured action plan.
4. The agent engine detects that a Swiftask skill is needed to take action.
5. Swiftask skills execute (search, code, analysis) and return their results.
6. The LLM integrates reasoning + skill results to formulate the final response.
7. The final response is transmitted to the user with no additional LLM interaction.
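To make the per-step breakdown concrete, here is a small tally of how credits might accumulate across the steps above. The numbers are made-up placeholders for illustration only, not Swiftask's real pricing; note that the two non-LLM steps cost nothing.

```python
# Illustrative tally of credit consumption per step.
# All figures are hypothetical placeholders, not actual Swiftask rates.

STEP_CREDITS = {
    "user question (input tokens)": 10,
    "context sent to LLM": 40,
    "intermediate reasoning": 15,
    "skills call (no LLM)": 0,           # skill execution itself is not an LLM call
    "skill result -> LLM": 25,
    "final enriched response": 30,
    "display to user (no LLM)": 0,       # simple transmission, no LLM interaction
}

total = sum(STEP_CREDITS.values())
print(total)  # only the LLM-related steps contribute to the total
```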
The 3 key factors identified by our analysis

**Context Length**: The richer the context (history, instructions), the higher the credit consumption.

**Number of Iterations**: More round trips between the LLM and Swiftask skills, driven by the complexity of your request, mean more credits consumed.

**Response Size**: Output is generated at each step of the orchestration process, so longer responses consume more credits.
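The three factors combine in a straightforward way: each iteration re-sends the context as input and generates a response as output. The sketch below shows one plausible way to model this; the formula and the per-token rates are illustrative assumptions, not Swiftask's actual billing model.

```python
# Toy cost model combining the three factors above.
# in_rate / out_rate are hypothetical credits-per-token values.

def estimate_credits(context_tokens: int, iterations: int,
                     response_tokens: int,
                     in_rate: float = 0.001, out_rate: float = 0.002) -> float:
    """Each iteration re-sends the context (input) and generates a response (output)."""
    input_cost = iterations * context_tokens * in_rate
    output_cost = iterations * response_tokens * out_rate
    return input_cost + output_cost

# Longer context or more LLM <-> skill iterations raise the estimate:
print(estimate_credits(2000, 1, 500))  # 3.0
print(estimate_credits(2000, 3, 500))  # 9.0
```

Doubling the context roughly doubles the input cost, while each extra LLM-to-skill iteration adds a full input + output round trip, which is why complex multi-skill requests consume noticeably more credits.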