What Boards Get Wrong About Technical Due Diligence (And How to Fix It)
In over two decades of building, leading, and advising technology companies, I've been on both sides of the due diligence table: as the leader whose technology was being scrutinised, and as the advisor conducting the assessment. The gap between what most boards think technical due diligence covers and what it actually needs to cover is alarmingly wide.
Most due diligence processes fixate on code quality and architecture but miss the signals that really matter: team capability, technical culture, and product-market alignment. Here's what I see boards getting wrong, and how to do it better.
The code review trap
The default approach to tech DD is to send in a senior developer or architect to review the codebase. They'll assess code quality, test coverage, architecture patterns, and technical debt. They'll produce a report with a Red/Amber/Green rating and a list of remediation recommendations.
This is necessary but wildly insufficient. I've seen companies with impeccable codebases that were building the wrong product. I've seen companies with messy, pragmatic code that were generating enormous value because they understood their market and could iterate faster than their competitors.
A clean codebase tells you the team has good engineering discipline. It tells you almost nothing about whether the technology will generate returns.
The code review should be one input into a much broader assessment. If it's the centrepiece of your technical DD, you're optimising for the wrong signal.
What matters more than code
Having conducted technical due diligence across sectors including IoT, SaaS, wearable technology, and e-commerce, I've found the factors most predictive of post-investment technology success to be:
Team capability and depth. Can the existing team deliver the product roadmap without the founder writing code at weekends? Is there genuine engineering leadership, or is one person the single point of failure for all technical decisions? What happens if the CTO leaves? I always map the team against the roadmap and look for gaps that would require immediate hiring; the sketch after these four factors shows one way to structure that mapping.
Technical culture and decision-making. How does the team make architectural decisions? Is there a culture of documentation, review, and knowledge sharing? Or does all institutional knowledge live in one person's head? The way a team operates under pressure tells you far more than their tech stack choices.
Product-market alignment of the technology. Is the architecture designed for the market the company is actually pursuing? I've seen startups build enterprise-grade platforms when they're selling to SMEs, and vice versa. The technology needs to match the go-to-market reality, not the founder's aspirational vision of what the company might become in five years.
Certification and compliance posture. For hardware and regulated-industry software, the state of certifications, compliance documentation, and ongoing regulatory obligations is critical. Missing or expiring certifications can represent six-figure remediation costs and months of delay. In sectors like payments (PCI-DSS), aviation, and oil and gas (ATEX), this can be a deal-breaking discovery.
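To make the team-capability point concrete, here is a minimal sketch of the team-against-roadmap mapping described above. All names, skills, and roadmap items are invented for illustration; in practice the inputs come from interviews and the hiring plan, not a script.

```python
# Minimal sketch: map roadmap items against team skills to surface
# hiring gaps and single-person dependencies. All data is hypothetical.

team = {
    "Priya":  {"firmware", "hardware-bringup"},
    "Marcus": {"backend", "devops"},
    "CTO":    {"backend", "firmware", "architecture", "devops"},
}

roadmap = [
    ("Mobile companion app", {"mobile"}),
    ("OTA firmware updates", {"firmware", "devops"}),
    ("Enterprise SSO",       {"backend", "security"}),
]

for item, needed in roadmap:
    coverage = {skill: sum(skill in skills for skills in team.values()) for skill in needed}
    gaps = [s for s, n in coverage.items() if n == 0]      # nobody covers the skill: a hiring gap
    fragile = [s for s, n in coverage.items() if n == 1]    # exactly one person covers it: key-person risk
    print(f"{item}: gaps={gaps or 'none'}, single-person dependencies={fragile or 'none'}")
```

The output matters less than the conversation it forces: every gap is either a hire, a delay, or a roadmap cut, and the board should know which before the deal closes.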
Red flags I look for immediately
Bus factor of one: a single person is the only one who understands how the system works, can deploy it, or can fix critical issues. A rough way to spot this in version control history is sketched after this list.
No deployment automation: manual deployment processes signal a team that hasn't invested in operational maturity, and they often correlate with reliability issues.
Undocumented third-party dependencies: particularly for hardware projects, unknown licensing obligations or single-source component dependencies can create significant post-acquisition risk.
Certification gaps: products shipping without current certifications, or certifications whose renewal would require significant design changes.
Roadmap-team mismatch: an ambitious roadmap alongside a team that's already fully committed to maintenance and support of the existing product.
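One quick, automatable signal for the bus-factor question is author concentration in the commit history. This is a sketch only, assuming a local git checkout; the 70% threshold is arbitrary, and the result is a prompt for interview questions, not a verdict.

```python
# Rough bus-factor signal: what share of the last year's commits came
# from the single most active author? High concentration suggests
# knowledge may live in one head; it is a starting point, not proof.
import subprocess
from collections import Counter

authors = subprocess.run(
    ["git", "log", "--since=1 year ago", "--format=%ae"],
    capture_output=True, text=True, check=True,
).stdout.split()

counts = Counter(authors)
top_author, top_commits = counts.most_common(1)[0]
share = top_commits / sum(counts.values())

print(f"{top_author} authored {share:.0%} of {sum(counts.values())} commits in the past year")
if share > 0.7:  # arbitrary illustrative threshold
    print("Possible bus factor of one: ask who else can deploy, debug, and release")
```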
The hardware dimension
Most technical DD frameworks were designed for software companies. When the target company has a hardware product (electronics, IoT devices, wearables), the assessment needs to expand significantly.
Hardware introduces questions around bill of materials (BOM) cost and stability, supply chain resilience, component lifecycle risk, manufacturing yield rates, warranty and field failure data, and certification maintenance. These are domains where software-focused DD practitioners frequently lack the expertise to make informed judgements.
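Some of that expanded scope can be screened systematically. Below is a minimal sketch of a BOM screen for lifecycle and sourcing risk; it assumes the BOM has been exported to a CSV with hypothetical column names (part_number, lifecycle_status, approved_sources), which a real assessment would need to adapt to the company's PLM or spreadsheet format.

```python
# Screen a BOM export for end-of-life and single-source component risk.
# Column names and status values are hypothetical; adapt to the real export.
import csv

def screen_bom(path: str) -> list[dict]:
    flagged = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            risks = []
            if row["lifecycle_status"].strip().upper() in {"NRND", "EOL", "OBSOLETE"}:
                risks.append(f"lifecycle: {row['lifecycle_status']}")
            if int(row["approved_sources"]) < 2:
                risks.append("single-source component")
            if risks:
                flagged.append({"part": row["part_number"], "risks": risks})
    return flagged

if __name__ == "__main__":
    for item in screen_bom("bom_export.csv"):
        print(item["part"], "->", "; ".join(item["risks"]))
```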
At a minimum, hardware DD should assess the maturity of the design for manufacture (DFM) process, the relationship with the contract manufacturer, and whether the company has adequate control over its own IP, particularly schematics, Gerber files, and firmware source code. I've encountered situations where a company's critical hardware IP was held by a third-party design house with no clear contractual assignment.
How to structure effective technical DD
My recommended approach structures the assessment across four pillars:
Pillar 1: Technology assessment. Architecture review, code quality, technical debt quantification, scalability analysis, and security posture. This is the traditional DD scope: necessary, but not sufficient.
Pillar 2: Team and culture assessment. Team structure mapping against the product roadmap, key-person dependency analysis, hiring plan feasibility, technical decision-making processes, and engineering culture indicators.
Pillar 3: Product and market alignment. Technology-strategy fit analysis, competitive technical positioning, IP defensibility assessment, and roadmap feasibility given current resources and architecture.
Pillar 4: Operational and compliance review. Deployment and operational maturity, monitoring and incident response capability, certification and compliance status, third-party dependency audit, and data governance posture.
Each pillar should result in specific, actionable findings, not just a traffic-light rating. The output should help the board understand not just what the risks are, but what it would cost and how long it would take to remediate them. That's the information that actually informs investment decisions.
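As an illustration of what that output can look like, here is one possible shape for a finding record, with an estimated cost and timeline attached to each item. The fields, figures, and currency are invented; the point is simply that every finding carries a remediation estimate, not just a colour.

```python
# Illustrative structure for DD findings: each one carries an estimated
# remediation cost and timeline, not just a traffic-light rating.
from dataclasses import dataclass

@dataclass
class Finding:
    pillar: str                 # e.g. "Team and culture"
    description: str
    rating: str                 # red / amber / green, kept as a summary only
    remediation_cost_gbp: int   # the estimate is what actually informs the decision
    remediation_months: float

findings = [
    Finding("Operational and compliance", "No automated deployment pipeline",
            "amber", remediation_cost_gbp=40_000, remediation_months=2.0),
    Finding("Team and culture", "Single engineer owns firmware build and release",
            "red", remediation_cost_gbp=90_000, remediation_months=6.0),
]

total_cost = sum(f.remediation_cost_gbp for f in findings)
longest = max(f.remediation_months for f in findings)
print(f"Estimated remediation exposure: £{total_cost:,} over roughly {longest:.0f} months")
```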
Getting the right assessor
The person conducting technical DD needs to have built and led technology teams themselves, not just reviewed code. They need to understand the commercial context, the operational reality, and the human factors that determine whether a technology organisation will thrive or struggle post-investment.
For companies with electronics and hardware products, the assessor needs direct experience with hardware product development, from design through certification to manufacturing. The failure modes and risk factors for hardware businesses are fundamentally different from those of pure software, and getting this wrong can be extremely costly.
Boards that invest in comprehensive, multi-dimensional technical due diligence make better decisions. The cost of a thorough assessment is trivial compared to the cost of discovering critical technical risks six months after completion.