Thinking Outside the Bot: How AI Fuels Creativity
Expanding Intellectual Capacity: Idea Exploration and Co-Creating on Demand (Part 4 of 5)
BLOG
Rick Hamilton
8/4/2025 · 7 min read
Idea Exploration and Co-creating on Demand
For centuries, successful innovators have sought out thinking partners. Isaac Newton was so prolific in his correspondence with other scientists and thinkers that 1,344 of his letters were compiled into an extensive seven-volume edition in the mid-twentieth century. Likewise, Charles Darwin was constantly in touch with both leading scientists and amateur collectors and naturalists. In modern times, innovators continue to find great value in working alongside trusted others – consider luminaries such as Bill Hewlett and David Packard, Steve Jobs and Steve Wozniak, and Bill Gates and Paul Allen. Innovators seek out thinking partners for a variety of reasons. Across history, reasons to collaborate often involved gathering data, observations, and perspectives, testing and refining ideas, and eventually building support for those ideas. Even today, our ideas best withstand the scrutiny of real-world conditions and fierce critics if we’ve first refined them through back-and-forth dialogue with other knowledgeable parties.
Innovating Alone
When working alone, we're trapped within our own knowledge boundaries and thinking patterns. Psychologists call this "fixation," where our initial ideas create mental ruts that prevent us from exploring alternative directions. Group brainstorming, despite its promise of diverse perspectives, often falls victim to social dynamics: dominant voices overshadowing quieter contributors, groupthink leading everyone toward safe consensus, and the pressure for immediate responses leaving little time for deep reflection. Even when we overcome these social barriers, we're still limited by the collective knowledge and experiences present in the room. We need ways to break free from our cognitive echo chambers while maintaining the iterative, exploratory nature that makes brainstorming effective. This is what has pushed innovators to seek out others with whom they can share ideas, iteratively refining their concepts in the process of “co-creation.”
Idea exploration, or co-creation, is a key high-value use case for AI models. This entails sharing goals and ideas with the model, and requesting feedback and suggestions. But be aware that doing this effectively requires multiple queries, where the user drills down on particular areas of interest for more detail.
Real-world Success
One successful example is drawn from my own experience: I was scheduled to moderate a panel on regulatory considerations for AI use in medical devices. While somewhat familiar with this subject, I lacked the expertise of the proposed panelists – clinicians, start-up founders, and attorneys who’d lived and breathed this topic for years. Since my best thinking typically happens while walking (away from electronic distractions), I grabbed a clipboard and pen and spent thirty minutes on a nearby nature trail, taking notes on possible discussion topics. Back at the office, I scrutinized the results and found I had created a half-dozen high-level topics for the panel, such as “FDA approval requirements” and “dealing with sensitive personal health information.”
But six high-level topics would not be enough for an hour-long panel. Turning to diverse AI models such as Gemini, ChatGPT, and Claude, I provided the essentials: panel context, my areas of expertise and those of the panelists, goals, and the six initial seed topics. The AI model responses were both expansive and nuanced, with sharper phrasings, deeper insights, and an occasional contrarian twist. While my initial list had implied the question, “how do you deal with sensitive personal health information in training regulated devices,” AI models decomposed this into more detailed questions to engage panelists and audiences at a deeper level, such as:
How do current regulatory frameworks like HIPAA, GDPR, and FDA guidance shape your organization’s approach to collecting, storing, and processing personal health information (PHI) in AI-enabled medical devices?
How effective are current de-identification or anonymization techniques in your view, particularly for complex multimodal data (e.g., imaging, sensor data, and EHR text)? Are there scenarios where re-identification risks are still high?
Regarding patient consent, do existing models (e.g., broad vs. dynamic consent) sufficiently cover the ongoing, adaptive nature of AI/ML systems? How should this evolve?
As AI models evolve post-deployment (e.g., through real-world learning), how do you ensure that new data inputs or updates don't introduce new privacy vulnerabilities?
What changes do you anticipate in regulatory expectations around PHI as AI tools become more autonomous, predictive, and embedded in care pathways?
After several minutes of prompting and combining the best outputs from multiple AI models, my panel preparation included many more questions, with much greater nuance and subtlety. Without revealing my brainstorming partner, I reveled in my new mastery of this complex topic. When the session arrived, I felt less like a novice surrounded by experts and more like a conductor orchestrating a symphony of perspectives on AI/ML regulation.
A Brainstorming Framework
Many of us have experienced the “flow” state of co-creating with a human collaborator. At its best, we’re with a trusted colleague before a whiteboard, each building on the other’s ideas until we reach a meaningful breakthrough. Today, we have new collaborators available to us, and we should each develop the habit of using AI models as sounding boards to expand our thinking. AI’s capabilities have evolved rapidly, from something akin to an undergraduate intern, to a graduate student, and increasingly to a PhD-level peer with whom we can share ideas and receive meaningful feedback. Their advantages lie in a combination of patience, range, and speed: they entertain endless iterations, respond from multiple vantage points, and remix ideas almost instantaneously. Further, working with them on idea exploration is like riding a high-speed elevator. One minute, you can be strategically surveying a complex subject from 30,000 feet, with the model helping you grasp the C-suite perspective on a problem. The next minute, you can push the “basement” button and zoom down into the details and minutiae of idea implementation. This can be disorienting until we learn to use it wisely.
There’s no single way to use AI models to co-create; getting ideas for your child’s birthday party and investigating weaknesses in your business plan require different mindsets, data, and prompts. That said, the following steps may be used to effectively drive a concept forward.
Context setting. Provide as much detail as you can concerning your goals, constraints, existing knowledge, and the specific type of thinking you need. This aligns the model’s response more closely with your expectations and needs.
Exploratory prompting. Present your initial ideas or challenges, and ask for expansion, alternatives, or novel angles. The key here is to treat this as a conversation with a colleague, not as a simple “prompt-once-and-walk-away” query. Go deep.
Perspective shifting. In some cases, you’ll benefit by having the model take on different personas. E.g., in the “regulatory” example above, I could have asked questions about the problem from the standpoints of the startup founder, the attorney, or the clinician. You’ll often find subtle (or sometimes not-so-subtle) differences in the responses returned.
Iterative refinement. Take the most promising ideas and deep dive on them with the model. Ask for variations, improvements, or combinations of ideas to tease out the best possible outcomes for your needs.
Importantly, recognize that you are firmly in the driver’s seat. Prioritize your own critical thinking, recognizing that AI is your partner – not your boss. Use your expertise, intuition, and understanding of real-world contexts to synthesize the model outputs and take ownership of the actionable next steps.
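The four steps above can be sketched in Python. This is purely an illustrative sketch: the prompt-builder functions, the example goal, and the topic strings are all hypothetical inventions for this post, not any particular model's API – each function simply produces one user-side turn you might send to whatever AI model you prefer.

```python
def build_context(goal, constraints, expertise):
    """Step 1: context setting -- describe the problem space up front."""
    return (f"Goal: {goal}\nConstraints: {constraints}\n"
            f"My background: {expertise}\n"
            "Please act as a candid brainstorming partner.")

def explore(seed_ideas):
    """Step 2: exploratory prompting -- ask for expansion and alternatives."""
    ideas = "; ".join(seed_ideas)
    return (f"Here are my initial ideas: {ideas}. "
            "Expand on each, suggest alternatives, and flag novel angles.")

def shift_perspective(persona):
    """Step 3: perspective shifting -- re-ask from another stakeholder's view."""
    return f"Now respond again from the standpoint of a {persona}."

def refine(best_idea):
    """Step 4: iterative refinement -- drill into the most promising idea."""
    return (f"Let's go deeper on '{best_idea}'. "
            "Offer three variations and one way to combine them.")

# Assemble the user-side turns of one co-creation conversation
# (example values are hypothetical, echoing the panel story above):
turns = [
    build_context("prepare an AI-regulation panel", "60 minutes, 4 panelists",
                  "patent strategy, not clinical practice"),
    explore(["FDA approval requirements", "handling sensitive PHI"]),
    shift_perspective("startup founder"),
    refine("handling sensitive PHI"),
]
```

The point of the structure is the ordering: context before exploration, and perspective shifts and refinement only after the model has something concrete to react to.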
Let the Model Fight Back
We’ve seen that AI models can generate concepts or features based on prompts, acting as a digital ideation partner. They also excel at providing contrarian perspectives and proposing alternative angles when we share a plan. Further, they can do so without the social friction that accompanies human disagreement. When we’re deep in a project or committed to a particular approach, we naturally throw ourselves into executing on our plan, but doing so leads to blind spots – assumptions we no longer question, risks we minimize, and alternatives we dismiss too quickly. Here, AI's ability to adopt opposing viewpoints becomes invaluable. By explicitly prompting models to "argue against this approach" or "identify the strongest case for why this might fail," we receive rigorous criticism without the interpersonal complexity of human relationships. Although AI models can be sycophantic, flattering your ideas and proclaiming your wisdom, you can let the model know up front that you want honest, objective criticism.
The key to effective contrarian prompting lies in specificity: rather than asking "what's wrong with this idea," try something like, "assume you're a competitor trying to exploit weaknesses in this strategy. What would you target?" Such approaches generate more targeted, actionable criticism. AI models can simultaneously hold multiple opposing perspectives, allowing them to raise concerns from different stakeholder viewpoints, e.g., regulatory, financial, operational, or user experience, that you might not naturally consider from your particular role or expertise.
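One way to make that specificity a habit is to template it. The sketch below is a hypothetical illustration (the function, plan text, and stakeholder list are invented for this post): it stamps out one targeted contrarian prompt per stakeholder, so every review pass covers the same set of opposing viewpoints.

```python
def contrarian_prompt(plan, stakeholder):
    """Build a specific, role-based contrarian prompt for a given plan."""
    return (f"Assume you are a {stakeholder} trying to exploit weaknesses in "
            f"this plan: {plan}. What would you target, and why?")

# Hypothetical example: stress-test one plan from three stakeholder angles.
plan = "launch an AI triage app in Q3"
stakeholders = ("regulator", "competitor", "privacy-conscious patient")
prompts = [contrarian_prompt(plan, s) for s in stakeholders]
```

Each resulting prompt can be sent to the model as a separate turn, yielding criticism from a distinct vantage point rather than one generic "what's wrong with this idea" response.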
Making AI Collaboration Stick
In terms of aiding innovation, think of AI as raising you up, giving you a more strategic, holistic view of the problem you’re trying to solve and letting you see it from multiple perspectives. To be successful, start with clear objectives, meld human intelligence and intuition with the AI’s outputs, and develop a culture of experimentation. Too many people ask a model a simple question, feel dissatisfied with the response, and walk away. If the answer you’re getting is unacceptable, the problem may lie in the way you’re asking the question. Learn and grow, with AI as a sounding board, an ideation partner, and even a provider of contrarian perspectives that help you find your blind spots. Try these routines on a challenging problem, at work or at home, this week, and see how your own effectiveness and problem-solving abilities improve.
About the Author
With a background in artificial intelligence/machine learning (AI/ML), cloud computing, and internet of things (IoT) technologies, Rick Hamilton is a named inventor on more than 1,060 issued US patents, making him one of the most prolific inventors in world history – just behind Thomas Edison. He has more than 30 years of patent portfolio development and governance experience, and 13 years of portfolio usage and organizational strategy experience. This includes establishing and leading patent strategy for a Fortune 10 healthcare company. He has spoken on artificial intelligence/machine learning, innovation and IP management, cloud computing, and IoT technologies in 32 countries, and has trained thousands of technical and business staff on best invention practices.
Rick can be reached at rick@hamiltonandboss.com with questions or comments.
Ready to secure your strategic advantage?
Visit us at hamiltonandboss.com to learn more or schedule a consultation.
Follow us on LinkedIn and X | Contact: info@hamiltonandboss.com