# Output file names and working directories
set FINAL_VERILOG_OUTPUT_FILE "${TOP_NAME}.mapped.v"
set FINAL_SDC_OUTPUT_FILE "${TOP_NAME}.mapped.sdc"
set REPORTS_DIR "reports"
set RESULTS_DIR "results"
file mkdir ${REPORTS_DIR}
file mkdir ${RESULTS_DIR}
# Tool setup: multicore compilation, HTML logs, and optimization app variables
set_host_options -max_cores 4
set_app_var html_log_enable true
set_app_var compile_seqmap_propagate_constants true
set_app_var power_cg_auto_identify true
set_app_var hdlin_check_no_latch true

# Target and link libraries (slow corner plus OpenRAM macro libraries)
set_app_var target_library [concat $slow_db]
set_app_var link_library [concat * $slow_db $openram_db(slow)]
check_library

# Enable SVF for formal verification and SAIF name mapping for power analysis
set_svf data/${TOP_NAME}.synthesis.svf
saif_map -start

# Read, elaborate, and link the RTL
define_design_lib WORK -path ./WORK
analyze -format ${FILE_FORMAT} -recursive -autoread $RTL_DIR -top ${TOP_NAME}
elaborate ${TOP_NAME}
current_design ${TOP_NAME}
link
# Clock definition, design checks, and design rule constraints
create_clock -name $CLOCK_NAME -period $PERIOD [get_ports $CLOCK_NAME]
set_fix_multiple_port_nets -all -buffer_constants -feedthroughs [all_designs]
check_design -summary
check_timing
set_max_fanout $FANOUT_VALUE $TOP_NAME
set_max_transition $TRANSITION_VALUE $TOP_NAME
set_max_capacitance $CAPACITANCE_VALUE $TOP_NAME
set_dynamic_optimization $DYNAMIC_OPTIMIZATION
set_leakage_optimization $LEAKAGE_OPTIMIZATION
# Synthesize, then make the hierarchy unique and the names Verilog-legal
compile -map_effort $MAP_EFFORT -power_effort $POWER_EFFORT -area_effort $AREA_EFFORT
uniquify -force
change_names -rules verilog -hierarchy -verbose
# Reports, mapped netlist, constraints, and design database
report_qor > ${REPORTS_DIR}/qor.rpt
report_timing -max_paths 15 > ${REPORTS_DIR}/timing.rpt
report_area -nosplit > ${REPORTS_DIR}/area.rpt
report_power -nosplit > ${REPORTS_DIR}/power.rpt
write -format verilog -hierarchy -output ${RESULTS_DIR}/${FINAL_VERILOG_OUTPUT_FILE}
write_sdc -nosplit ${RESULTS_DIR}/${FINAL_SDC_OUTPUT_FILE}
write -format ddc -hierarchy -output ${RESULTS_DIR}/${TOP_NAME}.ddc
exit
Seamlessly bridge natural language with professional EDA tools through intelligent AI orchestration and unified microservice architecture.
End-to-end RTL-to-GDSII implementation with professional EDA tools. Seamlessly integrates synthesis and physical design through intelligent automation and natural language commands.
AI-powered automation for complete EDA workflows with intelligent stage coordination and parameter extraction. Natural language to GDSII orchestration with session management and context preservation.
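The "parameter extraction" step above can be illustrated with a minimal sketch. This is a hypothetical, keyword-based toy (the function name, patterns, and output fields are assumptions, not AutoEDA's actual LLM-driven parser):

```python
import re

def extract_parameters(user_query: str) -> dict:
    """Toy parameter extractor: pull a design name and an
    optimization goal out of a natural-language request.
    (Illustrative sketch only, not AutoEDA's real parser.)"""
    params = {}
    # Match "... design <name> ..." in the query
    m = re.search(r"design\s+(\w+)", user_query, re.IGNORECASE)
    if m:
        params["design"] = m.group(1)
    # Crude keyword-based optimization-goal detection
    for goal in ("performance", "power", "area"):
        if goal in user_query.lower():
            params["goal"] = goal
            break
    return params
```

For example, the quick-start query "Run synthesis for design des with performance optimization" would yield the design name "des" and the goal "performance".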
Scalable FastAPI-based microservices with independent deployment, health monitoring, and load balancing. Distributed system design enables fault isolation and horizontal scaling for high availability.
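One way to picture the registry-plus-load-balancing idea is a per-stage pool of replica URLs with round-robin selection. A minimal sketch, assuming the stage names and ports from the quick-start section (the class and its API are illustrative, not AutoEDA's actual implementation):

```python
from itertools import cycle

class ServiceRegistry:
    """Map each EDA stage to one or more replica URLs and hand
    out requests round-robin. (Hypothetical sketch; the real
    services may register and balance differently.)"""
    def __init__(self):
        self._replicas = {}  # stage name -> cycling iterator of URLs

    def register(self, stage: str, urls: list) -> None:
        self._replicas[stage] = cycle(urls)

    def next_url(self, stage: str) -> str:
        # Each call rotates to the next replica for that stage
        return next(self._replicas[stage])

registry = ServiceRegistry()
registry.register("synthesis", ["http://localhost:18001"])
registry.register("placement", ["http://localhost:18002"])
```

Because each stage is an independent process behind its own URL, a crashed replica can be dropped from its pool without affecting the other stages, which is the fault-isolation property described above.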
Comprehensive CodeBLEU evaluation framework for measuring AI-generated code quality. Includes syntax matching, dataflow analysis, and semantic consistency metrics.
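CodeBLEU, as originally formulated, combines four component scores (n-gram match, weighted n-gram match, AST match, and dataflow match) as a weighted sum with equal default weights. A sketch of just the combination step, assuming the four components are computed elsewhere and already normalized to [0, 1]:

```python
def codebleu_score(ngram: float, weighted_ngram: float,
                   ast_match: float, dataflow: float,
                   weights=(0.25, 0.25, 0.25, 0.25)) -> float:
    """Combine the four CodeBLEU components into one score.
    Equal default weights follow the original formulation;
    callers can reweight to emphasize syntax or semantics."""
    a, b, c, d = weights
    return a * ngram + b * weighted_ngram + c * ast_match + d * dataflow
```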
Intelligent AI orchestration bridges natural language with professional EDA tools in four steps:
1. Submit design requirements through a conversational AI interface
2. The AI agent analyzes requests, extracts parameters, and selects optimal tools
3. AI decisions are translated into tool-specific commands through the Model Context Protocol
4. Professional EDA workflows are executed using Synopsys, Cadence, and open-source tools
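The steps above can be sketched as a simple query-to-service dispatch. The stage names and ports follow the quick-start section; the keyword matching and URL shape are illustrative assumptions — the real agent uses an LLM and the Model Context Protocol, not string matching:

```python
# Hypothetical dispatch from a natural-language query to a stage service.
STAGE_PORTS = {"synthesis": 18001, "placement": 18002,
               "cts": 18003, "routing": 18004}

def route_request(user_query: str) -> str:
    """Return the base URL of the microservice that should
    handle this query. (Toy illustration only.)"""
    q = user_query.lower()
    for stage, port in STAGE_PORTS.items():
        if stage in q:
            return f"http://localhost:{port}"
    raise ValueError(f"no EDA stage recognized in: {user_query!r}")
```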
Get up and running in minutes with our streamlined installation process. Follow these simple steps to set up AutoEDA's 4-server microservice architecture and start automating your EDA workflows.
Download AutoEDA repository and install Python dependencies
# Clone the AutoEDA repository
git clone https://github.com/Duke-CEI-Center/AutoEDA.git
cd AutoEDA
# Create virtual environment
python3 -m venv venv
source venv/bin/activate # Linux/Mac
# venv\Scripts\activate # Windows
# Install dependencies
pip install -r requirements.txt
Check EDA tools and configure environment variables
# Verify EDA tools (required)
dc_shell -version # Synopsys Design Compiler
innovus -version # Cadence Innovus
lmstat -a # Check license servers
# Set environment variables
export OPENAI_API_KEY="your_openai_api_key_here"
export MCP_SERVER_HOST="http://localhost"
export LOG_ROOT="./logs"
Launch the 4-server microservice architecture
# Start all 4 EDA microservices
python3 src/run_server.py --server all
# Services will run on:
# Synthesis: http://localhost:18001
# Placement: http://localhost:18002
# CTS: http://localhost:18003
# Routing: http://localhost:18004
# Start AI agent (new terminal)
python3 src/mcp_agent_client.py
Run a complete design flow with natural language
# Test synthesis with natural language
curl -X POST http://localhost:8000/agent \
-H "Content-Type: application/json" \
-d '{
"user_query": "Run synthesis for design des with performance optimization",
"session_id": "quickstart"
}'
# Or try a complete RTL-to-GDSII flow
curl -X POST http://localhost:8000/agent \
-H "Content-Type: application/json" \
-d '{
"user_query": "Run complete flow for design des",
"session_id": "quickstart"
}'
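The same request can be issued from Python using only the standard library. This sketch mirrors the curl examples above; the endpoint URL and JSON fields are taken from those examples, and the function names are assumptions:

```python
import json
import urllib.request

AGENT_URL = "http://localhost:8000/agent"  # endpoint from the curl examples

def build_payload(user_query: str, session_id: str = "quickstart") -> bytes:
    """JSON body matching the quick-start curl examples."""
    return json.dumps({"user_query": user_query,
                       "session_id": session_id}).encode()

def agent_request(user_query: str, session_id: str = "quickstart") -> dict:
    """POST the query to the agent endpoint and return its JSON reply."""
    req = urllib.request.Request(
        AGENT_URL,
        data=build_payload(user_query, session_id),
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```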
Meet the dedicated research team behind AutoEDA, working at the intersection of AI and Electronic Design Automation. Our interdisciplinary team spans multiple institutions and brings together expertise in machine learning, hardware design, and software engineering to revolutionize EDA workflows.
John Cocke Distinguished Professor
Duke University, Electrical & Computer Engineering
Assistant Professor
University of Maryland, Electrical & Computer Engineering
Undergraduate Researcher
DKU & Duke University
Undergraduate Researcher
Tsinghua & Duke University
ECE PhD Student
Duke University
ECE PhD Student
Duke University
ECE PhD Student
University of Maryland
ECE PhD Student
Duke University
We welcome collaborations from researchers, industry partners, and institutions interested in advancing AI-powered Electronic Design Automation. Whether you're working on EDA tools, machine learning applications, or hardware design, we'd love to explore potential partnerships and joint research opportunities.
Watch our comprehensive step-by-step installation guide and setup tutorials
Watch Now
Get help from our active community and connect with other AutoEDA users
Join Discussions