AI Development Platform with plugin architecture, Git integration, REST API, and watch mode. Supporting 14+ programming languages with method-level filtering, automatic LLM optimization, and real-time analysis. Perfect for AI-assisted development workflows.
v3.0.0 - Platform Foundation Release
If you find this tool helpful, consider buying me a coffee! Your support helps maintain and improve this project.
- `context-manager.js` - Main LLM context analysis script with exact token counting
- `.contextignore` - Files to exclude from token calculation (EXCLUDE mode)
- `.contextinclude` - Files to include in token calculation (INCLUDE mode)
- `README.md` - This documentation file
- `README-tr.md` - Turkish documentation (Türkçe dokümantasyon)
- Plugin Architecture - Modular, extensible system for languages and exporters
- Git Integration - Analyze only changed files, diff analysis, author tracking
- Watch Mode - Real-time file monitoring and auto-analysis
- REST API - HTTP server for programmatic access (6 endpoints)
- Performance - Caching system, parallel processing (5-10x faster)
- Modular Core - Scanner, Analyzer, ContextBuilder, Reporter
- Interactive Wizard Mode - User-friendly guided setup (default)
- CLI Mode - Traditional command-line interface (--cli flag)
- Interactive export - Prompts for export choice when no options specified
- Exact token counting using tiktoken (GPT-4 compatible)
- Multi-language support - 14+ languages: JavaScript, TypeScript, Python, PHP, Ruby, Java, Kotlin, C#, Go, Rust, Swift, C/C++, Scala
- Method-level analysis - Analyze tokens per function/method
- Detailed reporting - by file type, largest files, statistics
- Dual ignore system - respects both `.gitignore` and context ignore rules
- Include/Exclude modes - `.contextinclude` takes priority over `.contextignore`
- Method filtering - `.methodinclude` and `.methodignore` for granular control
- Core application focus - configured to analyze only essential code files
- LLM context export - generate optimized file lists for LLM consumption
- Clipboard integration - copy context directly to clipboard
- JSON/YAML/CSV/XML exports - multiple format options
- GitIngest format - Single-file digest for LLM consumption (inspired by GitIngest)
- TOON format - Ultra-compact format (40-50% token reduction)
- Dual context modes - compact (default) or detailed format
# Launch interactive wizard (guides you through options)
context-manager

The wizard provides a user-friendly interface to:
- Select your use case (Bug Fix, Feature, Code Review, etc.)
- Choose target LLM (Claude, GPT-4, Gemini, etc.)
- Pick output format (TOON, JSON, YAML, etc.)
Note: Wizard mode uses Ink terminal UI. If you experience visual artifacts, use CLI mode with --cli flag.
# Use CLI mode instead of wizard
context-manager --cli
# CLI mode with options
context-manager --cli --save-report
context-manager --cli --context-clipboard
context-manager --cli --gitingest
context-manager --cli --method-level
# Any analysis flag automatically enables CLI mode
context-manager -s # Auto CLI (save report)
context-manager -m # Auto CLI (method-level)
context-manager --context-export # Auto CLI (export)
# Combine multiple exports
context-manager --cli -g -s # GitIngest digest + detailed report

# Auto-detect LLM from environment
export ANTHROPIC_API_KEY=sk-...
context-manager # Automatically optimizes for Claude
# Explicit model selection
context-manager --target-model claude-sonnet-4.5
context-manager --target-model gpt-4o
context-manager --target-model gemini-2.0-flash
# List all supported models
context-manager --list-llms
# Context fit analysis
context-manager --cli --target-model claude-sonnet-4.5
# Output:
# Context Window Analysis:
# Target Model: Claude Sonnet 4.5
# Available Context: 200,000 tokens
# Your Repository: 181,480 tokens
# PERFECT FIT! Your entire codebase fits in one context.

Supported LLM models (9+ models):
- Anthropic: Claude Sonnet 4.5, Claude Opus 4
- OpenAI: GPT-4 Turbo, GPT-4o, GPT-4o Mini
- Google: Gemini 1.5 Pro, Gemini 2.0 Flash
- DeepSeek: DeepSeek Coder, DeepSeek Chat
Custom models are supported via `.context-manager/custom-profiles.json`.
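A custom profile is plain JSON. The exact schema isn't documented in this README, so the field names below (`name`, `provider`, `contextWindow`) are illustrative assumptions, not the confirmed format:

```json
{
  "models": [
    {
      "name": "my-local-llama",
      "provider": "custom",
      "contextWindow": 128000
    }
  ]
}
```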
# Analyze only uncommitted changes
context-manager --changed-only
# Analyze changes since a commit/branch
context-manager --changed-since main
context-manager --changed-since HEAD~5
context-manager --changed-since v2.3.0
# With author information
context-manager --changed-only --with-authors
# Output:
# Git Integration - Analyzing Changed Files
# ----------------------------------------------------------
#
# Found 3 changed files
# Impact: MEDIUM (score: 25)

# Start watch mode
context-manager watch
# With method-level analysis
context-manager watch -m
# Custom debounce (default: 1000ms)
context-manager watch --debounce 2000
# Output:
# Watch mode active
# File change: src/server.js
# Analysis complete: 12,450 tokens (45ms)
# Total: 64 files, 181,530 tokens

# Start API server
context-manager serve
# Custom port and authentication
context-manager serve --port 8080 --auth-token my-secret-token
# API Endpoints:
# GET /api/v1/analyze - Full project analysis
# GET /api/v1/methods - Extract methods from file
# GET /api/v1/stats - Project statistics
# GET /api/v1/diff - Git diff analysis
# POST /api/v1/context - Smart context generation
# GET /api/v1/docs - API documentation
# Example API calls:
curl http://localhost:3000/api/v1/analyze
curl "http://localhost:3000/api/v1/methods?file=src/server.js"
curl "http://localhost:3000/api/v1/diff?since=main"

# Using the NPM package globally
context-manager
context-manager --save-report
context-manager --context-clipboard

Context Manager includes real-world test repositories for validation:
# Express.js test repo (git submodule)
cd test-repos/express
# Run full test suite
context-manager --cli -m --target-model claude-sonnet-4.5
# Git integration test
context-manager --changed-since v5.0.0
# Watch mode test
context-manager watch

See test-repos/README.md for the complete testing guide.
Complete manual testing guide available at docs/MANUAL-TESTING-v3.0.md
Includes:
- 50+ test scenarios for all v3.0.0 features
- API endpoint validation
- Git integration testing
- Watch mode validation
- Performance benchmarks
# Run all tests
npm run test:comprehensive
# Run v3.0.0 specific tests
npm run test:v3
npm run test:git
npm run test:plugin
npm run test:api
npm run test:watch

The tool is configured to focus on core application logic only:
Included:
- Core MCP server implementation (`utility-mcp/src/`)
- Authentication and security layers
- Request handlers and routing
- Transport protocols and communication
- Utilities and validation logic
- Configuration management
- Error handling and monitoring

Excluded:
- Documentation files (`.md`, `.txt`)
- Configuration files (`.json`, `.yml`)
- Infrastructure and deployment files
- Testing and script directories
- Build artifacts and dependencies
- Workflow orchestration files (`utility-mcp/src/workflows/**`)
- Testing utilities (`utility-mcp/src/testing/**`)
- All non-essential supporting files
# Interactive analysis with export selection
context-manager
# Quiet mode (no file listing)
context-manager --no-verbose
# With detailed JSON report
context-manager --save-report
# Generate LLM context file list
context-manager --context-export
# Copy context directly to clipboard
context-manager --context-clipboard

When you run the tool without specifying export options (--save-report, --context-export, or --context-clipboard), it automatically prompts you to choose an export option after the analysis:
# Run analysis and get prompted for export options
context-manager
# The tool will show:
# Export Options:
# 1) Save detailed JSON report (token-analysis-report.json)
# 2) Generate LLM context file (llm-context.json)
# 3) Copy LLM context to clipboard
# 4) No export (skip)
#
# Which export option would you like? (1-4):

This interactive mode ensures you never miss the opportunity to export your analysis results in the format you need.
The token calculator supports two complementary filtering modes:
EXCLUDE mode:
- Default mode when only `.contextignore` exists
- Includes all files except those matching ignore patterns
- Traditional gitignore-style exclusion logic

INCLUDE mode:
- Priority mode - when `.contextinclude` exists, `.contextignore` is ignored
- Includes only files matching include patterns
- More precise control for specific file selection
- Perfect for creating focused analysis sets

Mode resolution:
- If `.contextinclude` exists → INCLUDE mode (ignore `.contextignore`)
- If only `.contextignore` exists → EXCLUDE mode
- If neither exists → include all files (respect `.gitignore` only)
# EXCLUDE mode: Include everything except patterns in .contextignore
rm .contextinclude # Remove include file
context-manager
# INCLUDE mode: Include only patterns in .contextinclude
# (automatically ignores .contextignore)
context-manager

Run `context-manager --help` to see all options:

- `--save-report`, `-s` - Save detailed JSON report
- `--no-verbose` - Disable file listing (verbose is default)
- `--context-export` - Generate LLM context file list (saved as llm-context.json)
- `--context-clipboard` - Copy LLM context directly to clipboard
- `--detailed-context` - Use detailed context format (~8.6k chars; default is compact ~1.2k)
- `--help`, `-h` - Show help message
The token calculator can generate optimized file lists for LLM consumption, with two format options:
Compact format (default):
- Size: ~2.3k characters (structured JSON)
- Content: Project metadata and organized file paths without token counts
- Format: Identical to the llm-context.json file - complete JSON structure
- Perfect for: LLM consumption, programmatic processing, structured data needs
- Usage: `--context-clipboard` or `--context-export`
Detailed format:
- Size: ~8.6k characters (comprehensive)
- Content: Full paths, categories, importance scores, directory stats
- Perfect for: Initial project analysis, comprehensive documentation
- Usage: `--detailed-context --context-clipboard`
- Smart file selection - Top files by token count and importance
- Directory grouping - Common prefix compression saves space
- Token abbreviation - "12k" instead of "12,388 tokens"
- Extension removal - ".js" removed to save characters
- Cross-platform clipboard - Works on macOS, Linux, and Windows
- Multiple output formats - JSON file or clipboard ready text
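The token abbreviation feature can be reproduced with a small formatter. This sketch is illustrative; the tool's own rounding may differ slightly:

```javascript
// Abbreviate a token count the way the compact export does:
// 12,388 -> "12k", 1,500,000 -> "1.5M"; small counts stay exact.
function formatTokens(n) {
  if (n >= 1_000_000) return `${(n / 1_000_000).toFixed(1).replace(/\.0$/, '')}M`;
  if (n >= 1_000) return `${Math.round(n / 1_000)}k`;
  return String(n);
}

console.log(formatTokens(12388)); // "12k"
```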
# Generate minimal LLM context and save to llm-context.json (2.3k chars JSON)
context-manager --context-export
# Copy minimal context directly to clipboard (2.3k chars JSON - identical to file)
context-manager --context-clipboard
# Copy detailed context to clipboard (8.6k chars)
context-manager --detailed-context --context-clipboard
# Combine with regular analysis
context-manager --save-report --context-clipboard

Compact Format (JSON - 2.3k chars):
{
"project": {
"root": "cloudstack-go-mcp-proxy",
"totalFiles": 64,
"totalTokens": 181480
},
"paths": {
"utility-mcp/src/server/": [
"CloudStackUtilityMCP.js"
],
"utility-mcp/src/handlers/": [
"workflow-handlers.js",
"tool-handlers.js",
"analytics-handler.js"
],
"utility-mcp/src/utils/": [
"security.js",
"usage-tracker.js",
"cache-warming.js"
]
}
}

Detailed Format (8.6k chars):
# cloudstack-go-mcp-proxy Codebase Context
**Project:** 64 files, 181,480 tokens
**Core Files (Top 20):**
1. `utility-mcp/src/server/CloudStackUtilityMCP.js` (12,388 tokens, server)
2. `utility-mcp/src/handlers/workflow-handlers.js` (11,007 tokens, handler)
...
**All Files:**
```json
[{"path": "file.js", "t": 1234, "c": "core", "i": 85}]
Use Cases
Compact Format (2.3k chars JSON):
- LLM Integration - Structured data for AI assistants with complete project context
- Programmatic Processing - JSON format for automated tools and scripts
- Context Sharing - Identical format in clipboard and file exports
- Development Workflows - Consistent structure for CI/CD and automation
Detailed Format (8.6k chars):
- Architecture Planning - Comprehensive project overview for major decisions
- New Team Member Onboarding - Complete codebase understanding
- Documentation Generation - Full project structure analysis
- Code Review Preparation - Detailed file relationships and importance
General Use Cases:
- Development workflow integration
- CI/CD pipeline context generation
- Automated documentation updates
- Project health monitoring
Context-manager now supports generating GitIngest-style digest files - a single, prompt-friendly text file perfect for LLM consumption.
GitIngest format consolidates your entire codebase into a single text file with:
- Project summary and statistics
- Visual directory tree structure
- Complete file contents with clear separators
- Token count estimates
This format is inspired by GitIngest, implemented purely in JavaScript with zero additional dependencies.
# Standard workflow - analyze and generate digest in one step
context-manager --gitingest
context-manager -g
# Combine with other exports
context-manager -g -s # digest.txt + token-analysis-report.json
# Two-step workflow - generate digest from existing JSON (fast, no re-scan)
context-manager -s # Step 1: Create report
context-manager --gitingest-from-report # Step 2: Generate digest
# Or from LLM context
context-manager --context-export # Step 1: Create context
context-manager --gitingest-from-context # Step 2: Generate digest
# With custom filenames
context-manager --gitingest-from-report my-report.json
context-manager --gitingest-from-context my-context.json

Why use a JSON-based digest?
- Performance: Instant digest generation without re-scanning
- Reusability: Generate multiple digests from one analysis
- Workflow: Separate analysis from export steps
- Flexibility: Use different JSON sources for different purposes
The generated digest.txt file looks like:
Directory: my-project
Files analyzed: 42
Estimated tokens: 15.2k
Directory structure:
└── my-project/
    ├── src/
    │   ├── index.js
    │   └── utils.js
    └── README.md
================================================
FILE: src/index.js
================================================
[complete file contents here]
================================================
FILE: src/utils.js
================================================
[complete file contents here]
- Single File: Everything in one file for easy LLM ingestion
- Tree Visualization: Clear directory structure
- Token Estimates: Formatted as "1.2k" or "1.5M"
- Sorted Output: Files sorted by token count (largest first)
- Filter Compatible: Respects all `.gitignore` and context ignore rules
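The digest layout shown above is easy to reproduce. This is a minimal sketch of the separator format only, not the tool's implementation; the 48-character `=` rule is an assumption based on the sample output:

```javascript
// Build a GitIngest-style digest body from a map of path -> contents.
const SEP = '='.repeat(48);

function buildDigest(files) {
  return Object.entries(files)
    .map(([filePath, body]) => `${SEP}\nFILE: ${filePath}\n${SEP}\n${body}\n`)
    .join('\n');
}

const digest = buildDigest({
  'src/index.js': 'console.log("hi");',
});
console.log(digest);
```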
- LLM Context Windows: Paste entire codebase as single context
- Code Reviews: Share complete project snapshot
- Documentation: Single-file project reference
- AI Analysis: Perfect for ChatGPT, Claude, or other LLMs
- Archival: Simple project snapshot format
Context-manager implements GitIngest format v0.3.1. See docs/GITINGEST_VERSION.md for implementation details and version history.
The .contextignore file is pre-configured for core application analysis:
# Current focus: Only core JS files in utility-mcp/src/
# Excludes:
**/*.md # All documentation
**/*.json # All configuration files
**/*.yml # All YAML files
infrastructure/** # Infrastructure code
workflows/** # Workflow definitions
docs/** # Documentation directory
token-analysis/** # Analysis tools themselves
utility-mcp/scripts/** # Utility scripts
utility-mcp/src/workflows/** # Workflow JS files
utility-mcp/src/testing/**   # Testing utilities

The .contextinclude file provides precise file selection:
# Include only core JavaScript files
# This should produce exactly 64 files
# Include main entry point
utility-mcp/index.js
# Include all src JavaScript files EXCEPT workflows and testing
utility-mcp/src/**/*.js
# Exclude specific subdirectories (using negation)
!utility-mcp/src/workflows/**
!utility-mcp/src/testing/**

For EXCLUDE mode (edit .contextignore):
# Remove lines to include more file types
# Add patterns to exclude specific files
# Example: Include documentation
# **/*.md <- comment out or remove this line
# Example: Exclude specific large files
your-large-file.js
specific-directory/**

For INCLUDE mode (create .contextinclude):
# Include specific files or patterns
src/**/*.js # All JS files in src
config/*.json # Config files only
docs/api/**/*.md # API documentation only
# Use negation to exclude from broad patterns
src/**/*.js
!src/legacy/** # Exclude legacy code
!src/**/*.test.js # Exclude test files

Priority order:
1. `.gitignore` (project root) - Standard git exclusions (always respected)
2. `.contextinclude` (token-analysis/) - INCLUDE mode (highest priority)
3. `.contextignore` (token-analysis/) - EXCLUDE mode (used when no include file exists)
4. `.contextignore` (project root) - Fallback EXCLUDE mode location
For exact token counting, install tiktoken:
npm install tiktoken

Without tiktoken, the tool falls back to smart estimation (~95% accuracy).
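The README doesn't document the fallback formula, so the widely used "about 4 characters per token" approximation below is an illustrative stand-in, not the tool's actual estimator:

```javascript
// Rough GPT-style token estimate: ~4 characters per token on average
// for English prose and code. Illustrative only; the tool's real
// estimator may weight whitespace and punctuation differently.
function estimateTokens(text) {
  return Math.ceil(text.length / 4);
}

console.log(estimateTokens('const x = 42;')); // 13 chars -> 4
```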
PROJECT TOKEN ANALYSIS REPORT
================================================================================
Total files analyzed: 64
Total tokens: 181,480
Total size: 0.78 MB
Total lines: 28,721
Average tokens per file: 2,836
Files ignored by .gitignore: 11,912
Files ignored by calculator rules: 198

BY FILE TYPE:
--------------------------------------------------------------------------------
Extension Files Tokens Size (KB) Lines
--------------------------------------------------------------------------------
.js 64 181,480 799.8 28,721
TOP 5 LARGEST FILES BY TOKEN COUNT:
--------------------------------------------------------------------------------
1. 12,388 tokens (6.8%) - utility-mcp/src/server/CloudStackUtilityMCP.js
2. 11,007 tokens (6.1%) - utility-mcp/src/handlers/workflow-handlers.js
3. 7,814 tokens (4.3%) - utility-mcp/src/utils/security.js
4. 6,669 tokens (3.7%) - utility-mcp/src/handlers/tool-handlers.js
5. 5,640 tokens (3.1%) - utility-mcp/src/ci-cd/pipeline-integration.js
Perfect for LLM context window optimization:
- 181k tokens = Core application logic only
- Clean analysis = No noise from docs, configs, or build files
- Focused development = Essential code for AI-assisted development
- Context efficiency = Maximum useful code per token
- Dual mode flexibility = Precise include/exclude control
- Ultra-minimal export = 1k chars (89% reduction) for frequent AI interactions
- Detailed export = 8.6k chars for comprehensive analysis when needed
You can integrate this tool into:
- CI/CD pipelines for code size monitoring
- Pre-commit hooks for token budget checks
- Documentation generation workflows
- Code quality gates
- LLM context preparation workflows
- Development environment setup
- INCLUDE mode active: Remove `.contextinclude` to use EXCLUDE mode
- Wrong files included: Check if `.contextinclude` exists (it takes priority)
- Mode confusion: Use verbose mode to see which mode is active
- Ensure no inline comments in ignore/include pattern files
- Use file patterns (`docs/**`) instead of directory patterns (`docs/`)
- Test specific patterns with verbose mode
- Check pattern syntax: `**` for recursive, `*` for single level
- Too high: Review included files with verbose mode, add exclusion patterns
- Too low: Check if important files are excluded, review patterns
- Inconsistent: Verify which mode is active (include vs exclude)
- Check if files are excluded by `.gitignore` (always respected)
- Verify calculator ignore/include patterns
- Ensure files are recognized as text files
- Use verbose mode to see exclusion reasons
LLM context manager with method-level filtering and token optimization. The ultimate tool for AI-assisted development.
Created by Hakkı Sağdıç
- File-level token analysis - Analyze entire files and directories
- Method-level analysis - Extract and analyze specific methods from JavaScript/TypeScript/Rust/C#/Go/Java
- Dual filtering system - Include/exclude files and methods with pattern matching
- LLM context optimization - Generate ultra-compact context for AI assistants
- Exact token counting - Uses tiktoken for GPT-4 compatible counts
- Multiple export formats - JSON reports, clipboard, file exports
- NPM package - Use programmatically or as a global CLI tool
- Pattern matching - Wildcards and regex support for flexible filtering
- Performance optimized - 36% smaller codebase with enhanced functionality
# Local installation
npm install @hakkisagdic/context-manager
# Global installation
npm install -g @hakkisagdic/context-manager
# Run globally
context-manager --help

# Clone and use directly
git clone <repository>
cd token-analysis
node token-calculator.js --help

# Interactive analysis with export selection
context-manager
# File-level analysis with clipboard export
context-manager --context-clipboard
# Method-level analysis
context-manager --method-level --context-export
# Analysis with reports
context-manager --method-level --save-report --verbose

# Focus on specific methods only
printf '%s\n' calculateTokens handleRequest '*Validator' > .methodinclude
context-manager --method-level
# Exclude test methods
echo "*test*\n*debug*\nconsole" > .methodignore
context-manager --method-level --context-clipboard# Basic analysis
context-manager
# Method-level analysis
context-manager --method-level
# Save detailed report
context-manager --save-report
# Copy context to clipboard
context-manager --context-clipboard
# Combine options
context-manager --method-level --save-report --verbose

const { TokenAnalyzer } = require('@hakkisagdic/context-manager');
// Basic file-level analysis
const analyzer = new TokenAnalyzer('./src', {
methodLevel: false,
verbose: true
});
// Method-level analysis
const methodAnalyzer = new TokenAnalyzer('./src', {
methodLevel: true,
saveReport: true
});
analyzer.run();

Priority order:
1. `.gitignore` (project root) - Standard git exclusions (always respected)
2. `.contextinclude` - INCLUDE mode (highest priority for files)
3. `.contextignore` - EXCLUDE mode (fallback for files)
.contextinclude - Include only these files:
# Include only core JavaScript files
utility-mcp/src/**/*.js
!utility-mcp/src/testing/**
!utility-mcp/src/workflows/**

.contextignore - Exclude these files:
# Exclude documentation and config
**/*.md
**/*.json
node_modules/**
test/
**/*.test.js
**/*.spec.js

.methodinclude - Include only these methods:
# Core business logic methods
calculateTokens
generateLLMContext
analyzeFile
handleRequest
validateInput
processData
# Pattern matching
*Handler # All methods ending with 'Handler'
*Validator # All methods ending with 'Validator'
*Manager # All methods ending with 'Manager'
TokenCalculator.* # All methods in the TokenCalculator class

.methodignore - Exclude these methods:
# Utility and debug methods
console
*test*
*debug*
*helper*
print*
main
# File-specific exclusions
server.printStatus
utils.debugLog| Pattern | Description | Example |
|---|---|---|
methodName |
Exact match | calculateTokens |
*pattern* |
Contains pattern | *Handler matches requestHandler |
Class.* |
All methods in class | TokenCalculator.* |
file.method |
Specific file method | server.handleRequest |
!pattern |
Negation (exclude) | !*test* |
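The wildcard rules in the table can be approximated by compiling each pattern to a regular expression. This is a sketch of the idea, not the tool's actual parser:

```javascript
// Compile a method pattern ("*Handler", "TokenCalculator.*", "!*test*")
// into a { negated, regex } matcher. Illustrative approximation only.
function compilePattern(pattern) {
  const negated = pattern.startsWith('!');
  const body = negated ? pattern.slice(1) : pattern;
  const source = body
    .replace(/[.+^${}()|[\]\\]/g, '\\$&') // escape regex metacharacters
    .replace(/\*/g, '.*');                // '*' becomes "match anything"
  return { negated, regex: new RegExp(`^${source}$`) };
}

const handler = compilePattern('*Handler');
console.log(handler.regex.test('requestHandler')); // true
```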
Use case: General codebase analysis, file organization
{
"project": {
"root": "my-project",
"totalFiles": 64,
"totalTokens": 181480
},
"paths": {
"src/core/": ["server.js", "handler.js"],
"src/utils/": ["helper.js", "validator.js"]
}
}

Use case: Focused analysis, debugging specific methods, LLM context optimization
{
"project": {
"root": "my-project",
"totalFiles": 64,
"totalTokens": 181480
},
"methods": {
"src/server.js": [
{"name": "handleRequest", "line": 15, "tokens": 234},
{"name": "validateInput", "line": 45, "tokens": 156}
],
"src/utils.js": [
{"name": "processData", "line": 12, "tokens": 89}
]
},
"methodStats": {
"totalMethods": 150,
"includedMethods": 23,
"totalMethodTokens": 5670
}
}

Use case: Comprehensive analysis, CI/CD integration, historical tracking
{
"metadata": {
"generatedAt": "2024-01-15T10:30:00.000Z",
"projectRoot": "/path/to/project",
"gitignoreRules": ["node_modules/**", "*.log"],
"calculatorRules": ["src/**/*.js", "!src/test/**"]
},
"summary": {
"totalFiles": 64,
"totalTokens": 181480,
"byExtension": {".js": {"count": 64, "tokens": 181480}},
"largestFiles": [...]
},
"files": [...]
| Option | Short | Description |
|---|---|---|
| `--save-report` | `-s` | Save detailed JSON report |
| `--verbose` | `-v` | Show included files and directories |
| `--context-export` | | Generate LLM context file |
| `--context-clipboard` | | Copy context to clipboard |
| `--method-level` | `-m` | Enable method-level analysis |
| `--help` | `-h` | Show help message |
Goal: Generate minimal context for AI assistants
# Ultra-compact method-level context
context-manager --method-level --context-clipboard

# Focus on core business logic only
printf '%s\n' handleRequest processData validateInput > .methodinclude
context-manager --method-level --context-export

Result: 89% smaller context compared to the full codebase
Goal: Understand project complexity and structure
# Analysis with detailed reports
context-manager --save-report --verbose

# Track largest files and methods
context-manager --method-level --save-report

Goal: Focus on specific problematic methods
# Debug authentication methods only
printf '%s\n' '*auth*' '*login*' '*validate*' > .methodinclude
context-manager --method-level --context-clipboard

# Exclude test and debug methods
printf '%s\n' '*test*' '*debug*' console logger > .methodignore
context-manager --method-level

Goal: Monitor codebase growth and complexity
# Daily token analysis for monitoring
context-manager --save-report
cp token-analysis-report.json "reports/analysis-$(date +%Y%m%d).json"

# Check method complexity trends
context-manager --method-level --save-report

Goal: Ensure code stays within token budgets
# Check if the codebase exceeds LLM context limits
context-manager --context-export
TOKENS=$(jq '.project.totalTokens' llm-context.json)
if [ "$TOKENS" -gt 100000 ]; then
  echo "Codebase too large for LLM context!"
  exit 1
fi

| Option | Short | Description | Example |
|---|---|---|---|
| `--save-report` | `-s` | Save detailed JSON report | `context-manager -s` |
| `--verbose` | `-v` | Show included files/methods | `context-manager -v` |
| `--context-export` | | Generate LLM context file | `context-manager --context-export` |
| `--context-clipboard` | | Copy context to clipboard | `context-manager --context-clipboard` |
| `--method-level` | `-m` | Enable method-level analysis | `context-manager -m` |
| `--help` | `-h` | Show help message | `context-manager -h` |
# Quick analysis with interactive export
context-manager
# Method-level analysis with all outputs
context-manager --method-level --save-report --context-export --verbose
# LLM-optimized context generation
context-manager --method-level --context-clipboard
# CI/CD monitoring
context-manager --save-report --context-export
# Development debugging
context-manager --method-level --verbose

const { TokenAnalyzer } = require('@hakkisagdic/context-manager');
// File-level analysis
const analyzer = new TokenAnalyzer('./src', {
verbose: true,
saveReport: true
});
analyzer.run();

const { TokenAnalyzer, MethodAnalyzer } = require('@hakkisagdic/context-manager');
// Method-level analysis with custom filtering
const analyzer = new TokenAnalyzer('./src', {
methodLevel: true,
contextExport: true,
verbose: false
});
analyzer.run();
// Extract methods from specific file
const methodAnalyzer = new MethodAnalyzer();
const methods = methodAnalyzer.extractMethods(fileContent, 'server.js');

const analyzer = new TokenAnalyzer('./src', {
// Enable method-level analysis
methodLevel: true,
// Output options
saveReport: true,
contextExport: true,
contextToClipboard: true,
// Verbosity
verbose: true,
// Compact context (for LLM optimization)
compactContext: true
});
// Access results
analyzer.run();
console.log('Analysis complete!');

const { MethodAnalyzer, MethodFilterParser } = require('@hakkisagdic/context-manager');
// Create custom method filter
const filter = new MethodFilterParser(
'./custom-methods.include',
'./custom-methods.ignore'
);
// Analyze specific file
const methodAnalyzer = new MethodAnalyzer();
const methods = methodAnalyzer.extractMethods(content, filePath);
// Filter methods
const filteredMethods = methods.filter(method =>
filter.shouldIncludeMethod(method.name, fileName)
);

- Node.js: >= 14.0.0
- tiktoken: ^1.0.0 (optional, for exact token counts)
MIT License - see LICENSE file for details
- Fork the repository
- Create your feature branch
- Add tests for new functionality
- Submit a pull request
- π Report Issues
- π Documentation
- π¬ Discussions
Created with ❤️ by Hakkı Sağdıç

