DARE Decision-Making and AI: Preserving Human Wisdom in an Algorithmic World

We're hearing it everywhere: "Jump on the AI bandwagon or perish!" The urgency is in the air. Companies scrambling to implement AI solutions. Professionals racing to upskill. Everyone afraid of being left behind or, worse, losing their job and their relevance.

This feels familiar. I remember a similar panic around computers and "the internet" in the 80s and 90s. "Learn to use a computer or become obsolete." "Get online or get left behind." The fear of technological irrelevance pushed everyone to accelerate their learning curves. And they were right: the digital revolution was a game changer.

Back in 2002, I was fresh out of college at the Honduran Central Bank. I joined the statistics team working on transitioning the GDP base year from 1990 to 2000. I was lucky—I'd taken an Excel course that suddenly became my lifeline. Everything lived in spreadsheets, and I often wondered how analysts managed the previous transition from 1978 to 1990 without computers.

The answer: handwritten calculations, manual processes, and methodical thinking with no "undo" button. Their "spreadsheets" involved literal cutting and pasting.

Even in the 2000s, we had assistants who took stenography, drafted correspondence, and managed research. Then email revolutionized everything—suddenly we all became our own assistants.

Now, watching colleagues interact with AI, I realize we've come full circle. We're getting assistants again—this time, algorithmic ones.

A few weeks ago, I came across a McKinsey article about building healthier teams. While exploring different decision-making approaches, the authors discussed moving beyond traditional frameworks like RACI to something more precise: DARE.

The DARE framework brings clarity to group decisions by clearly defining roles:

  • Deciders make the final call and own the outcomes

  • Advisors provide expert opinions and guidance

  • Recommenders analyze options and synthesize insights

  • Execution Stakeholders implement decisions in the real world

As I read about DARE, designed for teams with multiple stakeholders and competing voices, a thought struck me:

What if this is exactly what we need for AI interaction?

The more I explored the approach, the more naturally it seemed to map to AI interaction:

  • I've learned to stay firmly in the Decider seat. It's tempting to let AI make the call, but I've found that it rarely ends well. You set objectives, weigh all input, and make the final call. Before you even open ChatGPT, you decide what decision needs to be made and what success looks like.

  • AI excels as your Advisor. Here's where these tools shine—providing expert perspectives, exploring angles you hadn't considered, offering data and analysis on demand. Use AI to challenge your assumptions, present counterarguments, dive deep into complexity.

  • You must remain the Recommender. This is where the magic happens—and where many people abdicate responsibility. You synthesize AI insights with human wisdom, consult colleagues and mentors, consider contexts that algorithms can't grasp. You bridge the gap between AI's broad intelligence and your specific reality.

  • You own Execution completely. You implement decisions, monitor results, adjust course when needed. You understand the human impact, the cultural nuances, the relationship dynamics that determine whether good decisions become good outcomes.

AI should only occupy the Advisor seat. When we let it creep into Recommender or Execution roles, we surrender the judgment that only comes from lived experience.

Those of us who learned to think before computers had no choice but to develop certain mental muscles. We researched in libraries, consulted encyclopedias, and interviewed experts because that's how you got reliable information. We thought through problems thoroughly because errors were expensive to fix. We consulted multiple people because collaboration required intentional effort.

But what about those who grew up with Google, who learned to work in a world of abundant information and easy iteration? And what about those of us who've forgotten the old ways? Digital natives bring incredible strengths: comfort with complexity, rapid pattern recognition, fearless experimentation.

We may need to deliberately cultivate the analytical friction that previous generations acquired accidentally.

With AI at our fingertips, why bother with the hard work of thinking first? That's exactly the trap we need to avoid. Here are practices to maintain your analytical edge:

  • The "Source Before Speed" practice. Before asking AI anything, spend five minutes writing what you think the answer might be. When AI responds, compare. Notice where your instincts aligned or diverged. This builds independent thinking before algorithmic influence kicks in.

  • The "Steel Man" approach. Always ask AI to argue against its own recommendations. "What's the strongest case against this approach? What could go wrong? What am I not considering?" Train yourself to never accept first outputs.

  • The "Three Voice Minimum". AI counts as one perspective. Always find two more: a human expert, a different AI system, traditional research, or historical precedent. Cross-reference relentlessly.

  • The "No Undo" test. Since digital work culture encourages rapid iteration and constant revision, force this question: "What if I couldn't take this back?" Map the human consequences. Consider relationships, reputation, resources you can't recover.

But why keep AI strictly in the Advisor role? Even the most sophisticated AI carries blind spots that matter:

  • Context collapse. AI sees patterns but misses the specific cultural, timing, and relationship factors that make your situation unique.

  • Hallucination risk. AI can generate convincing but incorrect information, especially about recent events or specialized domains.

  • Bias inheritance. Every AI system carries the biases of its training data and creators, often in ways we can't detect.

  • Consequence blindness. AI understands logical outcomes but not emotional, political, or cultural ripple effects that determine real-world success.

History offers an important lesson about technological transitions: the promises rarely match the messy reality.

We heard similar promises about computers: "They'll make work easier and free up your time." But while technology did transform work, the people who stayed relevant weren't just those who learned to use computers—they were the ones who learned to work with them strategically.

Some people did lose jobs to automation. Others became more valuable because they understood how to combine human judgment with technological capability.

Now we're hearing the same promises about AI: "It will handle routine tasks so you can focus on what matters." But again, simply using AI won't guarantee relevance. The key will be using it intentionally—knowing when to rely on it and when to assert human judgment.

The outcome depends entirely on how clearly we define what belongs to humans and what belongs to machines.

Choosing to maintain human agency in an AI world requires intention and effort. It would be easier to let algorithms decide, to accept their first suggestions, to trust their confidence without question.

But meaningful work has never been about taking the easiest path. It's about making choices that honor what matters—preserving the uniquely human capacity for wisdom, judgment, and responsibility that no algorithm can replicate.

The choices we make about AI today will determine whether future generations maintain the capacity for independent thought and ethical reasoning, or become dependent on algorithmic guidance for decisions that require human wisdom.

What's your experience navigating AI while maintaining human judgment? Have you found frameworks or practices that help you stay grounded in your decision-making?
