The UK has over 8,000 registered software development agencies. About 200 of them are genuinely excellent. The rest range from competent to catastrophic — and from the outside, they all look remarkably similar. The agency that delivered a beautiful case study for another client may be the wrong fit for your project, your team, and your timeline.

Failed software projects cost UK businesses an estimated £1.2 billion per year. The majority of that waste is not caused by bad luck or technical complexity — it is caused by poor agency selection at the start. Businesses pick on price, on presentation, or on a referral that does not account for scope differences. This guide tells you exactly how to avoid that trap.

Before You Start: Know What You Are Actually Buying

Software development is not a commodity. Two agencies quoting for the same project brief may have fundamentally different interpretations of what they are delivering. Before you evaluate anyone, define these three things internally:

  • What does success look like? Not in terms of features — in terms of business outcomes. "Reduce manual invoicing time by 80%" is a success criterion. "Build an invoicing module" is a feature request. Agencies that understand business outcomes produce better results than those executing feature lists.
  • What is your internal capacity for project involvement? You will need to provide requirements, review designs, test features, and make decisions throughout the project. If you cannot allocate 5–10 hours per week of internal time, no agency can compensate for that gap.
  • What is your actual budget — not your target budget? Agencies calibrate their proposals to the budget you give them. If you quote a lower number hoping to negotiate, you will get a lower-scope proposal that does not do what you need. Tell agencies your real budget range and ask them what they can deliver within it.

The 12 Questions to Ask Every Agency You Shortlist

Question 1: Can you show me a project at a similar scale and complexity to mine?

What you are testing: Whether their portfolio is genuinely relevant to your project. An agency that has built marketing websites and e-commerce stores may not have the architecture experience for a complex B2B SaaS platform.

Good answer: A specific project with a comparable technical stack, similar feature complexity, and a client you can contact for a reference.

Red flag: Vague portfolio descriptions, case studies without specific technical detail, or "we work across all sectors and scales" without a directly comparable example.

Question 2: Who specifically will work on my project, and can I meet them?

What you are testing: Whether the team you meet in the sales process is the team that actually builds your product. This is one of the most common sources of post-contract disappointment in agency relationships.

Good answer: Named individuals, their CV or LinkedIn profile, and an offer to arrange a technical conversation with the lead developer.

Red flag: "We have a team of X developers available" without naming them. This indicates a bench-staffing model where whoever is available gets assigned to your project after signing.

Question 3: How do you handle scope changes?

What you are testing: Whether the agency has a mature process for managing inevitable requirement changes, or whether they treat every change as a renegotiation opportunity.

Good answer: A defined change control process: changes are assessed, priced, and approved before implementation. Small changes may be absorbed within sprint buffer; larger changes are formal change orders with timeline and cost implications documented.

Red flag: "We try to be flexible" (no process) or "every change will be a change order" (no flexibility at all). Both extremes create problems.

Question 4: What is your testing process?

What you are testing: Whether the agency has a QA culture built into their development process, or whether "testing" means the developer checks their own work before shipping it to you.

Good answer: Dedicated QA engineers, unit and integration test coverage standards, a staging environment that mirrors production, and a structured UAT process before any release.

Red flag: "We do manual testing." "Our developers test their own code." No mention of test coverage or a structured QA process.

Question 5: What happens if a key developer leaves mid-project?

What you are testing: Whether the agency has a resilient delivery model or whether your project is dependent on one person staying.

Good answer: Cross-training on codebases, thorough documentation practices, a bench of developers who can be onboarded, and a contractual commitment to maintain delivery continuity.

Red flag: Hesitation or vagueness. If the agency cannot give you a clear answer, they probably have not thought about it — and your project has a single point of failure.

Question 6: What is your approach to documentation?

What you are testing: Whether you will receive a maintainable codebase at the end of the project, or an undocumented system that creates long-term dependency on the agency.

Good answer: Architecture documentation, API documentation, inline code comments for complex logic, a README that lets a new developer understand the system, and deployment runbooks.

Red flag: "The code is the documentation." "We focus on shipping, not writing docs." Undocumented code is a future cost — either you pay to document it later, or you pay the original agency to maintain it because no-one else can understand it.

Question 7: Who owns the code and IP on completion?

What you are testing: Whether you have full legal ownership of everything produced, or whether there are licences, shared ownership clauses, or open-source dependencies with restrictive terms.

Good answer: Full IP assignment to you on payment of the final invoice, an explicit statement about any third-party libraries used and their licences, and no ongoing licence fees or dependencies on agency proprietary tooling.

Red flag: Vague answers, a contract that retains agency licence rights to any component, or no clear IP clause at all.

Question 8: How do you handle security in development?

What you are testing: Whether the agency builds security in from the start (OWASP standards, secure coding practices, penetration testing) or treats it as an afterthought.

Good answer: OWASP Top 10 as a baseline for web applications, secure data handling, encrypted storage for sensitive data, and either internal security review or access to third-party penetration testing.

Red flag: "We follow industry best practices" without specifics. "We can look at security after launch if needed." Security retrofitted after build is always more expensive and less effective than security built in from the start.

Question 9: What is your communication cadence during a project?

What you are testing: Whether you will have regular, structured visibility into progress — or whether you will send emails into a void and get updates only when something is wrong.

Good answer: Weekly progress reports, sprint demos every 1–2 weeks, a dedicated project manager or account manager as your point of contact, and access to the project management tool (Jira, Linear, etc.) with real-time visibility.

Red flag: "We will keep you updated." No defined cadence, no structured demo process, no project management tool access.

Question 10: What does your post-launch support look like?

What you are testing: Whether the agency has a structured handover process and ongoing support model, or whether your relationship ends at launch.

Good answer: A defined warranty period (typically 30–90 days) covering bug fixes at no cost, a documented handover including deployment guides and architectural notes, and an optional support retainer for ongoing maintenance.

Red flag: "We can quote for additional work after launch." No warranty period. No handover documentation. A post-launch relationship that is entirely on-demand with no SLA.

Question 11: Can you provide three client references I can speak to?

What you are testing: Whether the agency's reputation survives scrutiny beyond curated testimonials.

Good answer: Three contacts, willingness for you to call them without the agency present, and references that are actually relevant to your project scale and type.

Red flag: Written testimonials only. References that are "unavailable" or require an agency introduction. Clutch reviews offered as a substitute for real reference conversations. One reference conversation is worth more than twenty written testimonials.

Question 12: What is your process if the project runs over budget?

What you are testing: Whether the agency has a mature process for managing budget risk, or whether overruns will be handled reactively and at your expense.

Good answer: Transparent budget tracking throughout the project, an escalation process when spend approaches 80% of the agreed budget, and clear contractual terms about what triggers an overrun conversation vs what is absorbed.

Red flag: "That will not happen with us." Budget overruns are normal in complex software projects — agencies that deny this are either inexperienced or misleading you.

Red Flags That Predict Project Failure

Beyond the 12 questions, watch for these warning signs during the proposal and contract process:

  • A proposal delivered in under 48 hours for a complex project — this indicates they copy-pasted a template rather than thinking through your requirements
  • A fixed-price quote with no assumptions documented — every fixed-price quote has assumptions. If they are not written down, the agency will use them against you when scope ambiguity arises
  • Pressure to sign quickly — "This team slot fills up fast" is a sales tactic, not a genuine constraint. Good agencies plan capacity; they do not auction team slots to whoever signs first
  • No questions about your business or users — an agency that jumps straight to technology without understanding your business context will build the wrong thing efficiently
  • Lowest price in the shortlist by more than 25% — software development costs are driven primarily by hours. A significantly lower quote means fewer hours — which means either less scope or compromised quality

A Simple Scoring Framework to Compare Agencies

Once you have run your shortlist through the 12 questions, score each agency on these five dimensions (1–5, where 5 is excellent):

Dimension             | Weight | What to Score
Technical credibility | 30%    | Portfolio relevance, technical interview quality, team seniority
Process maturity      | 25%    | QA approach, change management, communication structure
Team stability        | 20%    | Named team members, resilience to staff changes, documentation practices
Commercial fairness   | 15%    | IP ownership clarity, warranty terms, transparent pricing
Reference quality     | 10%    | Relevance of references, candour of reference conversations

Multiply each score by its weight and sum the results. The agency with the highest total is your strongest candidate, subject to any hard red flags that disqualify a candidate regardless of overall score.
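The weighted sum is simple enough to run in a spreadsheet, but as a sketch it looks like this. The weights come from the table above; the example scores for "Agency A" are hypothetical.

```python
# Weighted scoring for comparing shortlisted agencies.
# Weights match the five dimensions in the table above.
WEIGHTS = {
    "technical_credibility": 0.30,
    "process_maturity": 0.25,
    "team_stability": 0.20,
    "commercial_fairness": 0.15,
    "reference_quality": 0.10,
}

def weighted_total(scores: dict) -> float:
    """Multiply each 1-5 score by its weight and sum (maximum 5.0)."""
    return sum(scores[dim] * weight for dim, weight in WEIGHTS.items())

# Hypothetical scores for one agency, each on the 1-5 scale.
agency_a = {
    "technical_credibility": 4,
    "process_maturity": 5,
    "team_stability": 3,
    "commercial_fairness": 4,
    "reference_quality": 5,
}

print(round(weighted_total(agency_a), 2))  # 4.15 out of a possible 5.0
```

An agency scoring 4 or 5 on technical credibility but 2 on team stability may still total well, which is why disqualifying red flags sit outside the arithmetic.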

Why SevenSolvers Passes These 12 Questions

We built this guide from what we have seen go wrong for UK businesses that came to us after a failed agency relationship. Every question above reflects a real failure mode we have had to help clients recover from.

At SevenSolvers, our standard engagement includes: a named project team you meet before signing, weekly progress reports and fortnightly sprint demos, GDPR-compliant development practices, full IP assignment on completion, a 60-day warranty period, and reference contacts for projects comparable to yours.

We do not win every pitch. But every client who chooses us knows exactly who will build their project, how it will be managed, and what they will own at the end.

Start the conversation at sevensolvers.com/contact.

Frequently Asked Questions

Should I choose a UK-based agency or consider offshore options?

UK-based agencies offer time zone alignment, simpler contract law, and often stronger accountability — but at higher rates. Offshore agencies offer cost savings of 40–65% but require more management overhead. A hybrid model — UK project management with offshore delivery — often delivers the best balance. Use the same 12 questions regardless of where the agency is based.

How long should I spend evaluating agencies before choosing?

For a project above £30,000, a 3–4 week evaluation period is appropriate: one week for initial outreach and brief sharing, one week for proposal receipt, one week for follow-up questions and reference calls, one week for final negotiation. Rushing this process is one of the most common causes of poor agency selection.

What should a software development proposal include?

A complete proposal should include: scope description, assumptions, technical approach, team composition (named), project timeline, milestone payment schedule, IP ownership terms, post-launch support terms, and a risk register. Any proposal missing more than two of these is incomplete.
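The "missing more than two" rule above can be expressed as a simple checklist test. The section names are taken directly from this guide; the example proposal is hypothetical.

```python
# Proposal-completeness check: a proposal missing more than two of
# the required sections is incomplete. Section names follow the
# checklist in this guide.
REQUIRED_SECTIONS = {
    "scope description", "assumptions", "technical approach",
    "team composition (named)", "project timeline",
    "milestone payment schedule", "IP ownership terms",
    "post-launch support terms", "risk register",
}

def check_proposal(sections: set) -> tuple:
    """Return (is_complete, missing_sections)."""
    missing = REQUIRED_SECTIONS - sections
    return len(missing) <= 2, missing

# Hypothetical proposal covering seven of the nine sections.
ok, missing = check_proposal({
    "scope description", "assumptions", "technical approach",
    "project timeline", "milestone payment schedule",
    "IP ownership terms", "post-launch support terms",
})
print(ok, sorted(missing))  # True, missing only two sections
```

Even when a proposal passes, chase the missing sections before signing: an unnamed team or an absent risk register is exactly the kind of gap the 12 questions are designed to surface.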