# Command Line Interface
Quick links:
- DSL Reference — YAML syntax for scenario definition
- Workflow Reference — analysis workflow configuration and execution
- API Reference — Python API for programmatic scenario creation
- Auto-Generated API Reference — complete class and method documentation
NetGraph provides a command-line interface for inspecting, running, and analyzing scenarios directly from the terminal.
## Installation
The CLI is available when NetGraph is installed via pip. A minimal install, assuming the package is distributed under the same name as its `ngraph` module:
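```bash
# Assumes the PyPI package name matches the `ngraph` module name
pip install ngraph
```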
## Basic Usage
The CLI provides three primary commands:
`inspect`
: Analyze and validate scenario files without running them

`run`
: Execute scenario files and generate results

`report`
: Generate analysis reports from results files
Global options (must be placed before the command):

`--verbose`, `-v`
: Enable debug logging

`--quiet`
: Suppress console output (logs only)
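For example:

```bash
# Global options go before the subcommand; command options go after it
python -m ngraph --verbose run my_scenario.yaml
```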
## Quick Start
```bash
# Inspect a scenario to understand its structure
python -m ngraph inspect my_scenario.yaml

# Run a scenario (generates my_scenario.results.json by default)
python -m ngraph run my_scenario.yaml

# Generate analysis report from results
python -m ngraph report results.json --notebook analysis.ipynb
```
## Command Reference
### `inspect`
Analyze and validate a NetGraph scenario file without executing it.
Syntax:
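```bash
python -m ngraph inspect <scenario_file> [--detail]
```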
Arguments:

`scenario_file`
: Path to the YAML scenario file to inspect

Options:

`--detail`, `-d`
: Show detailed information including complete node/link tables and step parameters

`--help`, `-h`
: Show help message
What it does:

The `inspect` command loads and validates a scenario file, then provides information about:
- Scenario metadata: Seed configuration and deterministic behavior
- Network structure: Node/link counts, enabled/disabled breakdown, hierarchy analysis
- Capacity statistics: Link and node capacity analysis with min/max/mean/total values
- Risk groups: Network resilience groupings and their status
- Components library: Available components for network modeling
- Failure policies: Configured failure scenarios and their rules
- Traffic matrices: Demand patterns and traffic flows
- Workflow steps: Analysis pipeline and step-by-step execution plan
In detail mode (`--detail`), it shows complete tables for all nodes and links with capacity and connectivity information.
Examples:
```bash
# Basic inspection
python -m ngraph inspect my_scenario.yaml

# Detailed inspection with complete node/link tables and step parameters
python -m ngraph inspect my_scenario.yaml --detail

# Inspect with verbose logging (note: global option placement)
python -m ngraph --verbose inspect my_scenario.yaml
```
Use cases:
- Scenario validation: Verify YAML syntax and structure
- Network debugging: Analyze blueprint expansion and node/link creation
- Capacity analysis: Review network capacity distribution and connectivity
- Workflow preview: Examine analysis steps before execution
### `run`
Execute a NetGraph scenario file.
Syntax:
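```bash
python -m ngraph run <scenario_file> [--results PATH] [--no-results] [--stdout] [--keys STEP ...] [--profile]
```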
Arguments:
scenario_file
: Path to the YAML scenario file to execute
Options:
--results
,-r
: Path to export results as JSON (default:<scenario_name>.results.json
)--no-results
: Disable results file generation (for edge cases)--stdout
: Print results to stdout in addition to saving file--keys
,-k
: Space-separated list of workflow step names to include in output--profile
: Enable performance profiling with CPU analysis and bottleneck detection--help
,-h
: Show help message
### `report`
Generate analysis reports from NetGraph results files.
Syntax:
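```bash
python -m ngraph report [results_file] [--notebook PATH] [--html [PATH]] [--include-code]
```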
Arguments:

`results_file`
: Path to the JSON results file (default: `results.json`)

Options:

`--notebook`, `-n`
: Output path for Jupyter notebook (default: `<results_name>.ipynb`)

`--html`
: Generate HTML report (default: `<results_name>.html` if no path specified)

`--include-code`
: Include code cells in HTML output (default: report without code)

`--help`, `-h`
: Show help message
What it does:

The `report` command generates analysis reports from results files created by the `run` command. It creates:

- Jupyter notebook: An interactive analysis notebook with code cells, visualizations, and explanations (default: `<results_name>.ipynb`)
- HTML report (optional): A static report for viewing without Jupyter, optionally including code (default: `<results_name>.html` when `--html` is used)
The report detects and analyzes the workflow steps present in the results file, creating appropriate sections and visualizations for each analysis type.
Examples:
```bash
# Generate notebook from default results.json
python -m ngraph report

# Generate notebook with custom paths
python -m ngraph report my_results.json --notebook my_analysis.ipynb

# Generate both notebook and HTML report (defaults derived from results filename;
# HTML omits code cells by default)
python -m ngraph report baseline_scenario.json --html

# Generate HTML report with custom filename
python -m ngraph report results.json --html custom_report.html

# Generate HTML report with code cells included
python -m ngraph report results.json --html --include-code
```
Use cases:
- Analysis documentation: Create shareable notebooks documenting network analysis results
- Report generation: Generate HTML reports for stakeholders who don't use Jupyter
- Iterative analysis: Create notebooks for further data exploration and visualization
- Presentation: Generate HTML reports for presentations and documentation
## Examples
### Basic Execution
```bash
# Run a scenario (creates my_network.results.json by default)
python -m ngraph run my_network.yaml

# Run a scenario and save results to a custom file
python -m ngraph run my_network.yaml --results analysis.json

# Run a scenario without creating any files (edge cases)
python -m ngraph run my_network.yaml --no-results
```
### Save Results to File
```bash
# Save results to a custom JSON file
python -m ngraph run my_network.yaml --results analysis.json

# Save to file AND print to stdout
python -m ngraph run my_network.yaml --results analysis.json --stdout

# Use default filename and also print to stdout
python -m ngraph run my_network.yaml --stdout
```
### Running Test Scenarios
```bash
# Run one of the included scenarios with results export
python -m ngraph run scenarios/square_mesh.yaml --results results.json
```
### Filtering Results by Step Names
You can filter the output to include only specific workflow steps using the `--keys` option:
```bash
# Only include results from the capacity_analysis step (printed to stdout,
# also written to the default results file)
python -m ngraph run scenario.yaml --keys capacity_analysis --stdout

# Include multiple specific steps and save to a custom file
python -m ngraph run scenario.yaml --keys build_graph capacity_analysis --results filtered.json
```
The `--keys` option filters by the `name` field of workflow steps defined in your scenario YAML file. For example, if your scenario has:
```yaml
workflow:
  - step_type: BuildGraph
    name: build_graph
  - step_type: MaxFlow
    name: capacity_analysis
    # ... other parameters
```
Then `--keys build_graph` will include only the results from the BuildGraph step, and `--keys capacity_analysis` will include only the MaxFlow results.
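To confirm which steps made it into a filtered results file, you can list the top-level step keys. A quick check using `jq` (assuming it is installed; `filtered.json` is the output from the example above):

```bash
# List the workflow step names present in the results file
jq '.steps | keys' filtered.json
```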
### Performance Profiling
Enable performance profiling to identify bottlenecks and analyze execution time:
```bash
# Run scenario with profiling
python -m ngraph run scenario.yaml --profile

# Combine profiling with results export
python -m ngraph run scenario.yaml --profile --results analysis.json

# Profile specific workflow steps
python -m ngraph run scenario.yaml --profile --keys capacity_analysis
```
The profiling output includes:
- Summary: Total execution time, CPU efficiency, function call statistics
- Step timing: Time spent in each workflow step with percentage breakdown
- Bottlenecks: Steps consuming >10% of total execution time
- Function analysis: Top CPU-consuming functions within bottlenecks
- Recommendations: Specific suggestions for each bottleneck
When to use profiling:
- Performance analysis during development
- Identifying bottlenecks in complex workflows
- Benchmarking before/after changes
## Output Format
The CLI outputs results as JSON with a fixed top-level shape:
```json
{
  "workflow": { "<step>": { "step_type": "...", "execution_order": 0, ... } },
  "steps": {
    "build_graph": { "metadata": {}, "data": { "graph": { "graph": {}, "nodes": [...], "links": [...] } } },
    "cap": { "metadata": { "iterations": 1 }, "data": { "flow_results": [ { "flows": [...], "summary": {...} } ] } }
  },
  "scenario": { "seed": 1, "failure_policy_set": { ... }, "traffic_matrices": { ... } }
}
```
- BuildGraph: stores `data.graph` in node-link JSON format
- MaxFlow and TrafficMatrixPlacement: store `data.flow_results` as lists of per-iteration results (flows + summary)
- NetworkStats: stores capacity and degree statistics under `data`
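As a sketch of pulling one step's data out of this structure from the shell (using `jq`; the step name `capacity_analysis` is just the example name from the filtering section, substitute a step from your own workflow):

```bash
# Print the per-iteration flow summaries for a MaxFlow step named "capacity_analysis"
jq '.steps.capacity_analysis.data.flow_results[].summary' my_scenario.results.json
```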
## Output Behavior
NetGraph CLI generates results by default for analysis workflows:
### Default Behavior (Results Generated)
- Executes the scenario
- Logs execution progress to the terminal
- Creates `<scenario_name>.results.json` by default
- Shows success message with file location
### Custom Results File
- Creates the specified JSON file instead of the default `<scenario_name>.results.json`
- Useful for organizing multiple analysis runs
### Print to Terminal
- Creates the default results file AND prints JSON to stdout
- Useful for viewing results immediately while also saving them
### Combined Output
- Creates custom JSON file AND prints to stdout
- Provides flexibility for different workflows
### Disable File Generation (Edge Cases)
- Executes scenario without creating any output files
- Only shows execution logs and completion status
- Useful for testing, CI/CD validation, or when only logs are needed
Result filenames default to the scenario or results basename, which keeps downstream `report` outputs consistent and easy to track.
## Integration with Workflows
The CLI executes the complete workflow defined in your scenario file, running all steps in sequence and accumulating results. This allows complex network analysis tasks to run without manual intervention.
### Recommended Workflow
- Inspect first: Always use `inspect` to validate and understand your scenario
- Debug issues: Use detailed inspection to troubleshoot network expansion problems
- Run after validation: Execute scenarios after successful inspection
- Iterate: Use inspection during scenario development to verify changes
```bash
# Development workflow
python -m ngraph inspect my_scenario.yaml --detail        # Validate and debug
python -m ngraph run my_scenario.yaml                     # Execute (creates my_scenario.results.json)
python -m ngraph report my_scenario.results.json --html   # Generate notebook and HTML report
```
### Debugging Scenarios
When developing complex scenarios with blueprints and hierarchical structures:
```bash
# Check if scenario loads correctly
python -m ngraph inspect scenario.yaml

# Debug network expansion issues (note: global option placement)
python -m ngraph --verbose inspect scenario.yaml --detail

# Verify workflow steps are configured correctly
python -m ngraph inspect scenario.yaml --detail | grep -A 5 "Workflow Steps"
```
The `inspect` command will catch common issues like:
- Invalid YAML syntax
- Missing blueprint references
- Incorrect node/link patterns
- Workflow step configuration errors
- Risk group and policy definition problems