A 200-page tender, 10 days, four people, and a calendar that wouldn't move.
Forge competes for public-sector construction tenders that arrive as 150–250 page PDFs with a deadline measured in working days. Their pre-sales team — two estimators, one proposal lead and one technical reviewer — would lose 8–10 business days on each one.
The bottleneck wasn't engineering. It was parsing: pulling out 300+ requirements from the tender, mapping them to past responses, double-checking compliance against scoring criteria, and assembling a consistent answer matrix. The actual technical and pricing work was the easy part — and got squeezed.
We mapped one tender end-to-end before writing a single line of code.
In a 90-minute kickoff we walked through the most recent tender with the proposal lead. By the end of the call we had a process map with 14 distinct steps, the average time per step, and three clear bottlenecks: requirements extraction (3 days), past-answer search (2 days), and compliance matrix validation (1.5 days).
“The estimators were essentially doing 30 hours of copy-paste before they even started costing the project.”
Proposal Lead, Forge
On day three we proposed the smallest possible automation: a requirements extractor plus an answer-reuse engine over the last 24 months of submitted bids. Everything downstream (pricing, technical descriptions, attachments) stayed manual.
18 days from kickoff to a working prototype on real data.
The prototype had three pieces. A document parser that ingested tender PDFs and produced a structured JSON of requirements with cross-references to scoring sections. A retrieval layer over Forge's 187 historical bids, scored by semantic similarity and weighted by recent win rate. And a review UI inside their existing SharePoint workspace — no new tools to learn.
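The retrieval scoring is the part worth sketching. A minimal Python illustration of similarity-weighted, win-rate-boosted ranking, assuming embeddings are computed upstream; every name here (`Requirement`, `PastAnswer`, `score`, the half-life and the 1.2 boost) is ours for illustration, not Forge's actual code:

```python
from dataclasses import dataclass
from datetime import date
import math

@dataclass
class Requirement:
    req_id: str            # e.g. an ID assigned in the parsed tender JSON
    text: str
    scoring_section: str   # cross-reference to the tender's scoring criteria

@dataclass
class PastAnswer:
    bid_id: str
    section: str           # where in the historical bid this answer lives
    embedding: list[float]
    submitted: date
    won: bool

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def score(query_embedding: list[float], cand: PastAnswer,
          today: date, half_life_days: float = 365.0) -> float:
    sim = cosine(query_embedding, cand.embedding)    # semantic similarity
    age_days = (today - cand.submitted).days
    recency = 0.5 ** (age_days / half_life_days)     # newer bids count more
    win_boost = 1.2 if cand.won else 1.0             # winning bids count more
    return sim * recency * win_boost
```

Ranking all 187 historical bids per requirement is cheap at this scale; the decay and boost constants are exactly the kind of knobs a pilot exists to tune.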
Crucially: every AI-generated answer carried a citation link to the source bid section. The proposal lead could audit every line in two clicks. That single design choice is what made the team trust the output enough to ship.
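That guarantee is cheapest to enforce in the data model rather than the UI: if an answer record cannot exist without a source pointer, no uncited text ever reaches the reviewer. A hedged sketch of what such a record could look like, with illustrative field names rather than Forge's actual schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DraftAnswer:
    requirement_id: str   # which tender requirement this answers
    text: str             # the reused or adapted answer text
    source_bid: str       # historical bid it was retrieved from
    source_section: str   # exact section, so a reviewer can audit in two clicks
    similarity: float     # retrieval score surfaced alongside the answer

def citation(a: DraftAnswer) -> str:
    # Rendered next to every answer in the review UI; there is no code path
    # that produces a DraftAnswer without source_bid and source_section.
    return f"{a.source_bid}, section {a.source_section} (match {a.similarity:.0%})"
```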
6 weeks, 4 live tenders, one signed framework agreement.
Forge ran the prototype on four live tenders during the pilot. Average prep time fell from 9.4 days to 2.1 days, a 78% reduction. The proposal lead spent the freed-up time on technical narrative and pricing strategy, the areas where human judgement actually moves the win rate.
In week 5 of the pilot, Forge won a €2.4M framework agreement they had previously deprioritized because the prep cost wasn't worth the expected hit rate. The unit-economics shift was the moment management greenlit a full rollout.
Numbers that survived the pilot.
- Prep time per tender: 9.4 days → 2.1 days (−78%)
- Tender capacity: ~14/yr → ~32/yr (+128%)
- Estimator hours per tender: 38 → 9 (−76%)
- Annual saving (people + opportunity): €212k
- Time to first ROI: 6 weeks
Forge is now scaling the same pattern into pricing and contract review.
After the pilot, Forge expanded scope to two adjacent workflows: bid pricing assembly (running model risk scenarios in seconds instead of hours) and contract review for awarded tenders. Both reuse the same retrieval and citation infrastructure.
The TL;DR for any company looking at a similar workflow: pick the smallest extraction-and-reuse loop, build it on your real data with citations from day one, and measure prep-time and win-rate before scaling. We can help.