The Artist Within: AI Requires Fluidity, Not Formality

Chain-of-Thought prompting favors logic over creativity, limiting AI's sparks of innovation.

Psychology Today
Reviewed by Davia Sills

Key points

  • Chain-of-Thought (CoT) helps LLMs break down problems logically, but this can hinder creative flow.
  • Creative tasks thrive on flexibility and spontaneity, where rigid CoT logic struggles to keep up.
  • For creative endeavors, LLMs should embrace randomness and exploration, unlocking deeper innovation.

Art: DALL-E/OpenAI

Amid the advancements in LLMs, a new three-letter acronym is gaining attention: CoT, or Chain-of-Thought. CoT is a prompting method that guides AI models to mimic human-like, step-by-step reasoning when tackling complex problems.

By breaking down these problems into intermediate steps, CoT enables LLMs to excel in areas like math, logic, and symbolic reasoning. However, when applied to creative tasks—such as writing fiction, generating innovative ideas, or designing unconventional solutions—CoT’s structured approach may not be the ideal choice.

In a recent paper, researchers conducted a meta-analysis covering over 100 studies and ran experiments on 20 datasets using 14 contemporary LLMs, including Llama 2, Llama 3.1, Mistral 7B, and Claude 3. Their findings are telling: CoT indeed boosts performance, but mainly on tasks that rely on structured reasoning, such as math or formal logic. Its value diminishes significantly when applied to creative tasks, which are often more fluid, abstract, and open-ended.

Why Chain-of-Thought Works Well—For Some Tasks

Chain-of-Thought prompting essentially mimics a step-by-step, human-like reasoning process. Imagine solving a complex math problem: You wouldn’t jump straight to the answer; you would break the problem into smaller, manageable steps. CoT prompts nudge LLMs to do the same, delivering structured, intermediate reasoning that mirrors how humans approach formal problems.

For example, if asked a question involving math or symbolic manipulation, an LLM utilizing CoT would outline its approach in steps: first, defining the problem, then performing operations or transformations, and finally, providing the solution. This structured breakdown is precisely why CoT performs so well on tasks that involve logical deduction, formal planning, or symbolic reasoning.
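
To make the contrast concrete, here is a minimal sketch of the two prompting styles. The `ask_llm` function is a hypothetical placeholder for whatever LLM interface you use, not a real library call:

```python
# A sketch of direct vs. Chain-of-Thought prompting.
# `ask_llm` is a hypothetical stand-in for any LLM completion API.

def ask_llm(prompt: str) -> str:
    """Placeholder: send `prompt` to a language model and return its reply."""
    raise NotImplementedError("Wire this to your LLM provider of choice.")

question = "A train travels 120 miles in 2 hours. What is its average speed?"

# Direct-answer prompting: ask for the result outright.
direct_prompt = f"{question}\nAnswer with just the final result."

# Chain-of-Thought prompting: the same question, plus a nudge to reason in steps.
cot_prompt = (
    f"{question}\n"
    "Let's think step by step: define what is being asked, "
    "perform the calculation, then state the final answer."
)

# answer = ask_llm(direct_prompt)   # e.g., "60 mph"
# reasoning = ask_llm(cot_prompt)   # e.g., "Speed = 120 / 2 = 60 mph"
```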

In this analysis, CoT did not improve performance on datasets involving commonsense reasoning or abstract thinking; on those tasks, it performed roughly on par with direct-answer prompting, underscoring that CoT is less effective for creative or intuitive reasoning.

The vast majority of performance gains from CoT come from datasets related to math or symbolic reasoning. In fact, when models generated answers with mathematical symbols like the “=” sign, CoT’s performance was significantly better than a direct-answer approach. This makes sense—CoT excels when the task benefits from logical structuring and precision.
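
As a rough illustration of that signal, one could flag model outputs that contain an "=" sign as likely symbolic work. The snippet below is a simplified sketch of that heuristic, not the authors' actual analysis code:

```python
# A simplified sketch of the "=" heuristic: treat outputs containing an
# equals sign as math/symbolic work, where CoT tends to pay off.

def looks_symbolic(model_output: str) -> bool:
    """Heuristic proxy: '=' in the output suggests symbolic computation."""
    return "=" in model_output

sample_outputs = [
    "Speed = 120 / 2 = 60 mph.",              # symbolic: CoT likely helps
    "She felt the city breathe around her.",  # open-ended: little CoT benefit
]

for text in sample_outputs:
    bucket = "math/symbolic" if looks_symbolic(text) else "open-ended"
    print(f"[{bucket}] {text}")
```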

Where CoT Falls Short—The Creative Arena

While CoT shines in structured tasks, its very strength—methodical step-by-step reasoning—becomes a limitation in creative tasks that require spontaneity and abstract thinking. Creative endeavors—whether it’s writing a compelling story, brainstorming innovative business ideas, or designing an unconventional artistic concept—are often non-linear. These tasks thrive on ambiguity, fluidity, and the ability to think beyond established frameworks. CoT, with its rigid structure, may stifle this creative freedom by forcing models to follow a strict path, limiting their ability to make spontaneous, innovative leaps.

In creative tasks, the most valuable ideas often emerge from unexpected connections, serendipitous insights, or intuitive leaps. Unlike math problems with a clear path from problem to solution, creativity embraces uncertainty and allows for exploration beyond conventional boundaries. CoT’s structured, linear approach simply doesn’t accommodate this level of abstraction and flexibility, making it less effective for open-ended, imaginative work.

Creativity Requires Fluidity, Not Formality

Let’s consider an example: writing a story. If you were to task a CoT-driven LLM with generating a short story, the model might over-structure its response. It could start by outlining the plot, defining characters, and moving logically through the narrative, but this approach may feel formulaic or forced. True creative writing often involves the freedom to pivot, allowing characters or events to evolve organically, with new and unexpected ideas emerging throughout the process.

Rigid structuring, while valuable for logical tasks, can constrain the imaginative flow. A more fluid, less step-by-step method may be better suited for creative tasks, where flexibility is key to producing content that feels natural and innovative. In these scenarios, LLMs that emphasize spontaneity and open-ended generation may outperform those relying on CoT. These models might be designed to balance coherence with the ability to explore less conventional routes, creating responses that feel more inspired and dynamic.
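
In practice, one common way to invite that looseness is sampling-based decoding. The sketch below assumes a hypothetical `ask_llm` call with `temperature` and `top_p` parameters, names borrowed from common LLM APIs rather than any specific library:

```python
# A sketch of sampling-based decoding for creative work. The parameter
# names (temperature, top_p) mirror common LLM APIs but are assumptions
# here; `ask_llm` is a placeholder, not a real client.

def ask_llm(prompt: str, temperature: float = 0.2, top_p: float = 1.0) -> str:
    """Placeholder LLM call. Higher temperature and top_p widen the pool of
    words the model samples from, trading predictability for surprise."""
    raise NotImplementedError

story_prompt = "Write the opening paragraph of a story about a lighthouse keeper."

# Low temperature: focused and consistent, closer to CoT-style discipline.
# careful = ask_llm(story_prompt, temperature=0.2)

# Higher temperature with nucleus sampling: looser, with room for odd, vivid turns.
# playful = ask_llm(story_prompt, temperature=1.1, top_p=0.95)
```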

Moving Beyond CoT for Creative Work

So, how do we move beyond CoT when tackling creative tasks with LLMs? The paper suggests that CoT is far from the final word on reasoning in LLMs. In fact, the authors highlight the need for more advanced approaches, such as models that incorporate search-based methods, interacting agents, or fine-tuned architectures tailored to specific domains.

For creativity, this could mean developing LLMs that focus on generating content in bursts, using random sampling, or embracing less predictable pathways. Models fine-tuned for artistic endeavors might also rely more on reinforcement learning from human feedback (RLHF) to grasp what feels fresh, innovative, and emotionally resonant. Tools that allow LLMs to collaborate in creative processes rather than strictly reason through them could unlock entirely new levels of expression.
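
One speculative shape this could take is generating in bursts: sample several high-temperature candidates, then let a learned scorer pick the freshest one. Everything in the sketch below, including `ask_llm` and `novelty_score`, is an illustrative stand-in rather than a real system:

```python
import random

# A sketch of "generation in bursts": sample several high-temperature
# candidates, then keep the one a scorer prefers. In a real system the
# scorer might be a reward model trained via RLHF on human judgments.

def ask_llm(prompt: str, temperature: float) -> str:
    """Placeholder LLM call; a real one returns a fresh sample each time."""
    return f"(sampled continuation of {prompt!r} at T={temperature})"

def novelty_score(text: str) -> float:
    """Stand-in for a learned judge of freshness; random for illustration."""
    return random.random()

def creative_burst(prompt: str, n: int = 5) -> str:
    """Generate n candidates loosely, then pick the highest-scoring one."""
    candidates = [ask_llm(prompt, temperature=1.2) for _ in range(n)]
    return max(candidates, key=novelty_score)

print(creative_burst("A poem about rust on a bicycle"))
```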

The Artist Within

While Chain-of-Thought remains a powerful tool for formal reasoning, its limitations in creative fields are increasingly evident. Creativity requires fluidity, spontaneity, and the ability to make surprising connections—qualities that rigid step-by-step reasoning might suppress. As we explore the boundaries of what LLMs can achieve, it’s crucial to develop models that embrace the chaos and beauty of creative thought.

The artist within LLMs might not rely on structure or order but rather on the freedom to roam beyond the constraints of logic, finding inspiration in the unexpected. The future of creative AI will likely lie in models that think more like artists and less like mathematicians.

The authors offer a clear message: CoT is not a one-size-fits-all solution. While it excels in domains like math and logic, creativity demands a more nuanced, open-ended approach. By moving beyond CoT, we can unlock the true potential of LLMs in artistic and innovative endeavors—where the beauty lies not in the steps but in the journey.