Warp Speed
My Role
AI Strategist & AI Architect
Company
Product School Project
Industry
Semiconductor / Custom ASIC Design
Timeline
January 2025 – Ongoing

Overview

The Warp Speed project aims to revolutionize front-end RTL design workflows through AI-driven automation.

Faced with exponential growth in design complexity, an aging workforce, and increasing time-to-market pressures, we sought to scale productivity by orders of magnitude—not by adding more engineers, but by leveraging compute in the form of LLMs, LangChain automation, and RAG using LlamaIndex.

By shifting from a manual, engineer-driven process to a compute-driven, AI-augmented workflow, we enable:
• Faster chip design cycles (from roughly a year to three months)
• Efficient automation of Verilog generation, compilation, verification, and documentation
• A natural language interface to automate engineering tasks

This transformation redefines engineering productivity, making chip deliverables scale as a function of compute, not headcount.
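At its core, the compute-driven flow chains generation, compilation, and verification, stopping at the first failure. A minimal sketch of that control flow, where every stage function is a hypothetical stub standing in for the real LLM- and toolchain-backed step:

```python
# Hedged sketch: each stage is a stub standing in for the real
# LLM/LangChain-backed step (names are illustrative, not real APIs).

def generate_verilog(spec: str) -> str:
    # Placeholder for LLM-driven RTL generation from a natural-language spec.
    return f"// RTL for: {spec}\nmodule top(); endmodule"

def compile_rtl(rtl: str) -> bool:
    # Placeholder for invoking the real compiler; here, a trivial sanity check.
    return "module" in rtl

def verify_rtl(rtl: str) -> bool:
    # Placeholder for simulation/verification.
    return rtl.endswith("endmodule")

def run_pipeline(spec: str) -> dict:
    """Run generate -> compile -> verify, stopping at the first failure."""
    rtl = generate_verilog(spec)
    result = {"rtl": rtl, "compiled": False, "verified": False}
    if compile_rtl(rtl):
        result["compiled"] = True
        result["verified"] = verify_rtl(rtl)
    return result
```

In the real workflow, a failed stage feeds its logs back to the LLM for a correction pass instead of simply stopping.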

My Role

I served as the AI Strategist & Lead Architect, responsible for:
• Developing the AI product strategy for integrating LLMs into RTL automation workflows
• Designing LangChain-powered automation pipelines for Verilog generation, compilation, and verification
• Defining LLM inference server requirements to ensure scalability and security
• Creating human-in-the-loop validation mechanisms to maintain high accuracy


Research Methods

• Validation of LLM capabilities for front-end RTL design
• Investigation into LangChain for controlling tools and reading logs
• Benchmarking LLM-generated Verilog against human-written code
• Scalability studies (LLM utilization, parallel design efficiency)
• Research into internal private AI cloud capabilities with partners

Software and Tools

• LLM Frameworks: Open-source models, orchestrated with LlamaIndex and LangChain
• Hardware Design: Verilog, ASIC design toolchains (e.g., Synopsys, Cadence)
• AI Infrastructure: Internal LLM inference servers for secure, high-context processing
• Automation & Orchestration: Containerized workflows for nightly testing

Outline

The Setup

Objectives & Constraints

Objectives:
• Automate front-end RTL design, verification, and documentation
• Reduce engineering effort per chip design
• Transition from engineering-driven to compute-driven workflows
• Accelerate customer feedback loops with overnight turnaround for any design changes
• Scale chip design deliverables beyond traditional engineering limits


Constraints:
• LLM-generated Verilog must compile and pass verification
• AI-based automation pipelines must integrate with legacy toolchains
• Compute-resource scaling must be cost-effective
• Human-in-the-loop checkpoints required for mission-critical verification

Stakeholders & Collaborations

• AI Strategy & Automation Team (LLM model selection, LangChain integrations)
• Chip Design Engineers (workflow validation, human-in-the-loop testing)
• Product & Business Leadership (defining ROI metrics, scaling strategies)
• IT & Security (ensuring compliance with internal IP protections)
• Private AI Cloud Specialists (inference server investigation and setup, RAG reference designs)


Making it Happen

Challenges & Impact

The Problem:
• Chip design cycles were slow, constrained by manual engineering workflows
• Engineers were overloaded, with increasing complexity & time-to-market demands
• Design methodologies hadn’t evolved, failing to leverage disruptive AI advancements

The Impact:
• The proposal to automate RTL workflows with AI was approved, and the work is ongoing.
• I was offered the lead AI architect role, with additional engineering resources to manage once the project cleared its initial crawl phase.

Approach & Methodology

Step 1: AI Model Validation
• Evaluated LLM-generated Verilog for accuracy, syntax correctness, and functional equivalence
• Selected LlamaIndex for RAG, integrating past designs into the AI's knowledge base
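LlamaIndex handles retrieval in the actual flow; the core idea, ranking past design snippets by relevance to the current request and stuffing the top hits into the prompt, can be sketched with the standard library alone (the corpus and scoring below are toy stand-ins, not our production index):

```python
def score(query: str, doc: str) -> int:
    """Toy relevance score: count of shared lowercase tokens.
    A real RAG stack would use embedding similarity instead."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the k corpus snippets most relevant to the query."""
    return sorted(corpus, key=lambda doc: score(query, doc), reverse=True)[:k]

# Hypothetical summaries of past designs in the knowledge base.
past_designs = [
    "fifo buffer with gray code pointers for clock domain crossing",
    "uart transmitter with configurable baud rate",
    "axi stream width converter",
]
hits = retrieve("clock domain crossing fifo", past_designs, k=1)
```

The retrieved snippets are then prepended to the generation prompt so the model can reuse proven design patterns.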

Step 2: LangChain Workflow Integration
• Implemented automated toolchains for Verilog compilation, testing, and verification
• Set up structured log parsing to extract actionable insights from design tools
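The log-parsing half of this step can be illustrated with a small regex extractor; the log lines and pattern below are toy stand-ins (real Synopsys/Cadence output formats differ), but the idea of turning tool output into structured, actionable records is the same:

```python
import re

# Toy examples of tool log lines; real EDA tool formats differ.
LOG = """\
Info: elaborating module counter
Error: syntax error near 'endmodule' (counter.v:42)
Warning: implicit wire 'clk_en' (counter.v:17)
Error: port width mismatch on 'data_out' (top.v:9)
"""

# Pattern is illustrative, matching the toy format above.
ISSUE = re.compile(r"^(Error|Warning): (.+) \((\S+):(\d+)\)$", re.MULTILINE)

def parse_issues(log: str) -> list[dict]:
    """Extract severity, message, file, and line from tool output."""
    return [
        {"severity": m[0], "message": m[1], "file": m[2], "line": int(m[3])}
        for m in ISSUE.findall(log)
    ]

issues = parse_issues(LOG)
errors = [i for i in issues if i["severity"] == "Error"]
```

Structured records like these are what the LangChain agent consumes to decide whether to retry generation, patch a specific file, or escalate to an engineer.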

Step 3: Human-in-the-Loop Review
• Introduced human validation checkpoints for high-stakes designs
• Optimized error correction cycles by combining LLM + engineer oversight
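The checkpoint logic itself is simple routing: designs go to an engineer when the block is mission-critical or the model's confidence falls below a threshold. A minimal sketch (the confidence scores and 0.95 threshold are illustrative assumptions, not measured values):

```python
def needs_human_review(confidence: float, mission_critical: bool,
                       threshold: float = 0.95) -> bool:
    """Route a design to an engineer when model confidence is low
    or the block is mission-critical."""
    return mission_critical or confidence < threshold

# Hypothetical batch of generated designs awaiting triage.
designs = [
    {"name": "pcie_phy", "confidence": 0.99, "mission_critical": True},
    {"name": "led_blinker", "confidence": 0.98, "mission_critical": False},
    {"name": "dma_ctrl", "confidence": 0.80, "mission_critical": False},
]
review_queue = [d["name"] for d in designs
                if needs_human_review(d["confidence"], d["mission_critical"])]
```

Keeping the gate this explicit makes it auditable: every design that skipped review can be traced back to a recorded confidence score.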

Step 4: Scalability & Compute Efficiency
• Benchmarked compute-to-output ratios to track automation efficiency
• Designed compute-scaling workflows, ensuring cost-effective resource allocation
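The compute-to-output ratio we track reduces to straightforward arithmetic, shown here with hypothetical numbers:

```python
def compute_to_output(gpu_hours: float, verified_modules: int) -> float:
    """GPU-hours spent per verified module; lower is better."""
    if verified_modules == 0:
        return float("inf")
    return gpu_hours / verified_modules

# Hypothetical figures: same compute budget, more modules verified
# per run as the pipeline improves.
baseline = compute_to_output(120.0, 4)
improved = compute_to_output(120.0, 10)
```

Tracking this ratio over time is what lets us claim deliverables scale with compute rather than headcount.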

Execution & Implementation

• Gained access to an internal GPU inference server for testing.
• Evaluated engineering Gantt charts to assess the feasibility of getting chip lead times down to 3 months.
• Tested Llama LLMs to see if they could stitch together top-level Verilog netlists.
• Execution and implementation are still ongoing.
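The netlist-stitching test can be illustrated with plain string templating: given submodule names and ports, emit a top-level module that instantiates and wires them. This is a deterministic toy version (module and port names are hypothetical); in the actual experiment the LLM produces the stitching and we check that the result compiles:

```python
def stitch_top(modules: dict[str, list[str]]) -> str:
    """Emit a top-level Verilog module instantiating each submodule,
    wiring every port to a same-named top-level wire."""
    wires = sorted({p for ports in modules.values() for p in ports})
    lines = ["module top();"]
    lines += [f"  wire {w};" for w in wires]
    for name, ports in modules.items():
        conns = ", ".join(f".{p}({p})" for p in ports)
        lines.append(f"  {name} u_{name} ({conns});")
    lines.append("endmodule")
    return "\n".join(lines)

netlist = stitch_top({
    "uart_tx": ["clk", "rst", "tx"],
    "uart_rx": ["clk", "rst", "rx"],
})
```

Comparing the LLM's output against a deterministic reference like this is one way to score its structural correctness.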

Outcomes & Impact

• The project proposal was approved; work toward these outcomes is ongoing.


Conclusion

Key Takeaways & Personal Growth

• AI is no longer a tool—it’s a design partner in semiconductor workflows
• Natural language automation unlocks a paradigm shift in front-end RTL workflows
• Compute-driven design scales faster than engineering-limited approaches

Industry & Career Impact

• Warp Speed sets the foundation for AI-driven RTL workflows in high-speed chip design
• The AI-powered automation strategy is replicable across semiconductor firms
• The case study serves as a blueprint for future AI-integration in hardware design

Working Experience
R&D FW Design Engineer
Broadcom
2019 - Present
Senior RF Design Engineer
Garmin
2018
Principal IC Design Engineer
Broadcom
2011 - 2018
Embedded Systems Engineer
Self
2014 - 2015
Associate Electrical Engineer
Logic PD
2007 - 2011
Lead Technician
GE Intelligent Platforms
2004 - 2007
Startup Technician
Self
2005 - 2006
Education Experience
GenAI & LLMs for Developers
NVIDIA
2025
AI for Product
Product School
2025
UX / UI Design
Springboard
2024
Product Management
Product School
2022
MSEE
University of Minnesota
2012
BEE
University of Minnesota
2011
AS in Electronics
Brown College
1999
Master Practitioner of NLP
iNLP Center
2020
Quantum Coach
QCA
2021