Structured capability, shared methods, and cohort learning across public institutions.
We helped design and facilitate RedLab, aligning labs under a common method, installing governance and cadence, and moving priority challenges from ideas to evidence-backed delivery.
Built on MicroCanvas® v2.1 and IMM‑P® gates.

The Red de Laboratorios de Innovación (RedLab) was created under OGTIC to strengthen innovation capacity in the Dominican Republic's public sector. The goal was to move from isolated efforts to a structured, scalable ecosystem where teams can design, test, and implement solutions to complex challenges.
Early hurdles included uneven methods, fragmented governance, and uncertainty about how to sustain participation across diverse institutions. RedLab needed a common framework, a clear program structure, and practical tools that build capability while delivering visible results.
Doulab partnered with OGTIC to design and facilitate Cohorts 01 and 02, anchored in the MicroCanvas Framework (MCF 2.1) and the Innovation Maturity Model Program (IMM-P®). The aim was to give public servants effective tools and a repeatable process that position RedLab as a pillar for public innovation.
RedLab in one line: clearer gates, shared language, better delivery.
Who benefits first: policy, service delivery, and digital teams that need clearer gates and faster decisions.
Social proof: cohort format and shared playbooks help teams see quick wins and reuse what works.
What RedLab implemented: operating model, cohorts, shared method, governance, and playbooks.
Governance model: RACI per initiative; weekly cadence; gate checklist per stage; decision log and risk register for traceability.
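For illustration only, here is a minimal sketch of how a single initiative's governance record could be captured under that model. The class and field names (raci, gate_checklist, decision_log, risk_register) are hypothetical, not RedLab's actual tooling.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Decision:
    date: str
    summary: str
    owner: str

@dataclass
class Risk:
    description: str
    severity: str      # e.g. "low", "medium", "high"
    mitigation: str

@dataclass
class InitiativeGovernance:
    name: str
    raci: Dict[str, str] = field(default_factory=dict)                  # role -> person or unit
    gate_checklist: Dict[str, List[str]] = field(default_factory=dict)  # stage -> checklist items
    decision_log: List[Decision] = field(default_factory=list)
    risk_register: List[Risk] = field(default_factory=list)

# Example record for one initiative (illustrative values only).
initiative = InitiativeGovernance(
    name="Digital permit intake",
    raci={"Responsible": "Lab team", "Accountable": "Programme lead",
          "Consulted": "Service owner", "Informed": "OGTIC steering group"},
    gate_checklist={"Discovery": ["Problem statement", "Stakeholder map"],
                    "Validation": ["Experiment plan", "Evidence pack"]},
)
initiative.decision_log.append(Decision("2025-09-01", "Proceed to Validation", "Programme lead"))
initiative.risk_register.append(Risk("Low interview coverage", "medium", "Add two user segments"))
```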
KPI tree by stage. Discovery: signal quality and interview coverage. Validation: conversion to key action and time to decision. Delivery: cycle time and escaped defects. Scale: adoption, satisfaction, and unit economics where relevant.
Examples: interview coverage in Discovery, conversion to key action in Validation, cycle time in Delivery, adoption and satisfaction in Scale.
Capability baseline at kickoff, tracked quarterly against the KPI tree to show maturity gains over time.
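As a rough, assumed sketch, the stage-based KPI tree and the quarterly check against the kickoff baseline could be expressed like this; the metric keys mirror the families named above, while the structure and the capability_delta helper are illustrative, not RedLab's reporting code.

```python
# Stage -> metric families, following the KPI tree described above.
KPI_TREE = {
    "Discovery":  ["signal_quality", "interview_coverage"],
    "Validation": ["conversion_to_key_action", "time_to_decision"],
    "Delivery":   ["cycle_time", "escaped_defects"],
    "Scale":      ["adoption", "satisfaction", "unit_economics"],
}

def capability_delta(baseline: dict, quarter: dict) -> dict:
    """Return metric-by-metric change versus the kickoff baseline."""
    return {
        metric: quarter[metric] - baseline[metric]
        for metrics in KPI_TREE.values()
        for metric in metrics
        if metric in baseline and metric in quarter
    }

# Illustrative numbers only.
baseline = {"interview_coverage": 0.5, "cycle_time": 30.0}
q3 = {"interview_coverage": 0.75, "cycle_time": 21.0}
print(capability_delta(baseline, q3))   # {'interview_coverage': 0.25, 'cycle_time': -9.0}
```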
Operational rhythm: weekly reviews, monthly gate checks, and quarterly capability snapshots.
SLA: gate review response within five working days, with decision memo or next-evidence request.
Evidence pack: problem and assumptions, stakeholders and JTBD, experiment plan and results, artifact links, decision memo, next step.
Insights are coded into a signal library to inform next experiments and program roadmaps.
After-action reviews at the end of each cohort feed into the signal library and the next-cohort roadmap.
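A minimal sketch, under stated assumptions, of how an evidence pack and the signal library could be represented; the field names and the code_signal helper are hypothetical, not the program's actual schema.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class EvidencePack:
    problem: str
    assumptions: List[str]
    stakeholders_jtbd: Dict[str, str]   # stakeholder -> job to be done
    experiment_plan: str
    results: str
    artifact_links: List[str]
    decision_memo: str
    next_step: str

signal_library: Dict[str, List[str]] = {}   # tag -> coded insights

def code_signal(pack: EvidencePack, tag: str, insight: str) -> None:
    """File an insight from an evidence pack under a reusable tag."""
    signal_library.setdefault(tag, []).append(f"{pack.problem}: {insight}")

# Illustrative values only.
pack = EvidencePack(
    problem="Citizens abandon the online permit form",
    assumptions=["Drop-off is caused by the document upload step"],
    stakeholders_jtbd={"Applicant": "Get a permit without visiting an office"},
    experiment_plan="A/B test a simplified upload step",
    results="Completion rose in the simplified variant",
    artifact_links=["prototype-v2"],
    decision_memo="Proceed to Delivery",
    next_step="Pilot with two municipalities",
)
code_signal(pack, "form-friction", "Upload step drives most drop-off")
```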
What changed, how decisions improved, and how capability grew.
Tracked metric families, as of September 2025: decision latency, cycle time, adoption and satisfaction, and capability growth.
See the case diagrams below for the system flow and capability progression.
Prefer a briefing for your team or partners?
To speed things up: share your goals, timelines, constraints, and how you measure success today.
Planning a cohort or event?
Exploring co-branded cohorts or MOUs?
Start small: Discovery call → ClarityScan → Gate 1 pilot.
See more examples in case studies.
Get your baseline in 15 to 20 minutes.
No prep required.